Automatic evaluation of pressure sore status by combining information obtained from high-frequency ultrasound and digital photography

Sahar Moghimi a, Mohammad Hossein Miran Baygi b,*, Giti Torkaman c

a Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
b Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
c Department of Physical Therapy, Tarbiat Modares University, Tehran, Iran

* Corresponding author. E-mail addresses: s.moghimi@ferdowsi.u.ac.ir (S. Moghimi), miranbmh@modares.ac.ir (M.H. Miran Baygi).

Computers in Biology and Medicine 41 (2011) 427–434. doi:10.1016/j.compbiomed.2011.03.020

Article history: Received 27 February 2010; Accepted 8 March 2011

Keywords: Digital color images; Sonographic assessment; Color histogram; Feature extraction; Image processing; Fuzzy integral; Neural networks; Pressure sore; Guinea pigs

Abstract

In this study, the different phases of pressure sore generation and healing are investigated through a combined analysis of high-frequency ultrasound (20 MHz) images and digital color photographs. Pressure sores were artificially induced in guinea pigs, and the injured regions were monitored for 21 days (data were obtained on days 3, 7, 14, and 21). Several statistical features of the images were extracted, relating to both the altering pattern of tissue and its superficial appearance. The features were grouped into five independent categories, and each category was used to train a neural network whose outputs were the four days. The outputs of the five classifiers were then fused using a fuzzy integral to provide the final decision.
We demonstrate that the suggested method provides a better decision regarding tissue status than using either imaging technique separately. This new approach may be a viable tool for detecting the phases of pressure sore generation and healing in clinical settings.

Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.

1. Introduction

Pressure sores are a major health care issue, specifically in patients with impaired mobility or a reduced ability to sense injury. Unrelieved pressure on tissue causes ischemia, which if prolonged may result in necrotic tissue and pressure sore formation [1]. Accurate and reliable techniques for assessing the extent and severity of pressure sores can be very helpful in evaluating treatment strategies, and may even permit early diagnosis of pressure sore generation. The most popular assessment tools among clinicians consist of an acetate sheet and a non-allergic liquid for measuring wound perimeter and volume [2]. Computed tomography (CT), magnetic resonance imaging (MRI), digital photography, and high-frequency ultrasound (HFU) have all been studied for the assessment of pressure sores. CT and MRI are not economical, so they cannot be employed in small offices and clinics, and the final test results are not immediately available. In addition, they expose the patient to X-rays, injected dyes, and/or magnetic fields [3]. Digital photography is a simple and practical way to record a wound's appearance. A sequence of digital images can reveal valuable information such as changes in the wound's dimensions and colors [4–9], but using this technique one cannot detect the full extent of tissue damage in cases where the tissue is undermined [4]. Therefore, by utilizing this technique alone we may sometimes fail to differentiate phases of pressure sore generation and healing. HFU provides a means of studying the structure of undermined tissue [10].
However, interpreting the data obtained from these systems requires intensive training. HFU has been employed for evaluating wounds, including burn scars, surgery wounds, and pressure ulcers, and has been shown to be comparable to or better than the aforementioned methods at wound assessment [11–17]. Rippon et al. [12] used artificially induced, acute wounds in pigs to demonstrate that HFU and histology are comparable in their ability to reveal the dominant wound healing parameters (e.g., wound depth, eschar/blood clot depth, collagen accumulation, and granulation tissue depth). Dyson et al. [14] argued that high-frequency ultrasound may measure the wound region more effectively than photography. Wendelken et al. [3] monitored wounds by filling the cavity with a sterile mapping gel and dressing the area with film. The dimensions of the wound were then measured using a software package. A study of the early stages of pressure ulcer generation revealed that HFU can detect subdermal tissue and skin edema before any clinical or visual signs of skin breakdown appear [16]. Changes in tissue regularity and homogeneity during the healing process have also been observed using HFU [12–14]. However, quantitative measurements for monitoring the above healing parameters with HFU are limited. These measurements include the co-occurrence matrix, explained in detail by Theodoridis et al. [18] and used to analyze the echographic structures of skin and liver tissues [19–21]; and randomly weighted frequency components of the intensity values, used to calculate the frequency band energy in the region of interest (ROI) as a measure of echogenicity [17].
In a recent work, the generation and healing of pressure sores were assessed quantitatively by extracting relevant parameters from the HFU images [22]. However, it was demonstrated that this technique may fail to discriminate between some phases of pressure sore generation and healing. Analysis of the color content of digital photographs has several applications, including the assessment of skin tumors, erythema, and wounds [8,9,23–33]. Several successful attempts have been made to classify different types of wounds or tissues present in the wound bed (i.e., granulation, necrotic, or slough) by clustering or segmenting various sets of features (textures and/or statistics extracted from RGB, HSV, HSI, LAB, and LUV histograms) [26–34]. Many researchers have also studied algorithms for determining the wound area and volume automatically [5,35–39]. Some researchers have recently tried to evaluate the 3D geometry of wounds using color images [6,7,39]. The color content of digital images has also been employed to assess healing in acute wounds and pressure sores [8,9,40]. Since photography is only useful when visual signs of the wounds are apparent, these studies have naturally focused on the appearance of the tissue. The literature survey demonstrates that HFU and digital photography have both been repeatedly used for the assessment of pressure sores. Digital photography is a practical and low-cost imaging technique, which provides valuable information about the appearance of the tissue under study. However, changes in the deeper layers of tissue, including those in the early stages of pressure sore generation, cannot be investigated using this technique. HFU, on the other hand, has been utilized for studying deeper tissue damage which is not visually recognizable. This technique seems to be a good candidate for investigating the undermined tissue, considering its lower cost in comparison with the other imaging techniques (e.g.
CT and MRI) and its potential to be used in small offices or clinics. The objective of this study is to combine the information obtained by HFU and digital photography to assess the process of pressure sore generation and healing automatically. Each of these imaging techniques provides valuable information about certain aspects of pressure sore generation and healing [4–17]. Therefore, by combining the decisions of experts trained separately on the information obtained from each of the two imaging techniques, the goal of this study is to provide more accurate decisions about the phases of pressure sore generation and healing than could be achieved by employing either imaging technique separately.

Fig. 1. Region of interest (ROI) selection. EE, entry echo; D, dermis; FT, fatty tissue.

2. Materials and methods

2.1. Animal model for pressure sore generation

A guinea pig model was developed for inducing and monitoring pressure sores. The system used for this research and our process for generating pressure sores are fully described in a previous publication [22]. Briefly, pressure was uniformly applied through a 0.75-cm-diameter disk over the trochanter region of the hind limbs of 28 animals, using a computer-controlled surface pressure delivery system. The load was kept constant at 400 ± 5 g for 5 h. The same region was then monitored over 21 days (measurements taken on days 3, 7, 14, and 21 after pressure sore generation). These days were chosen because pressure sores induced in this manner have been reported to reach their maximum severity after seven days [41,42]. It has also been stated that before this time the actual extent of necrosis is difficult to define, and that after seven days healing reverses some of the more obvious signs of tissue damage [8]. The 21-day period therefore covers both the sore generation and healing phases.
The precise intervals were selected during the experiment to allow changes in tissue structure and skin appearance to manifest. Later, we will treat the day of measurement as an unknown class to be determined from the HFU and imaging data. The Ethical Commission of Tarbiat Modares University approved this study.

2.2. High frequency ultrasound imaging

The sore region was monitored using a 20 MHz B-mode ultrasound scanner (DUB_USB, taberna pro medicum, Lüneburg, Germany) in a controlled environment. The 20 MHz scanner can image to a depth of approximately 8 mm, but we deliberately reduced the size of the imaging window, as no significant echographic structure was observed below the muscle fascia [22]. The scan region was marked on the skin of each animal to avoid confusion concerning probe head positioning. This enabled us to preserve the angle between the probe head and the body line. Animals were kept in a restrainer during image acquisition to avoid motion artifacts. To reduce processing time and avoid noise at the probe edges, a 1.5 × 2 mm² window (225 × 300 pixels) under the superficial layers was selected as the region of interest (ROI) for image analysis (Fig. 1).

2.3. Digital color imaging

A 10-megapixel Canon digital camera was used to obtain color photographs. In order to control lighting conditions and maintain a constant distance between the camera and the tissue, we built a cylindrical Teflon box to support the camera. A 1.5 × 1.5 cm² square hole was cut in the bottom of the box, taking into account the dimensions of the induced pressure sores. Sixteen LEDs were set in the box, as illustrated in Fig. 2. A diffusing surface was placed in front of the LEDs in order to provide a uniform light source. Since the color images were obtained on different days, factors other than physiological changes may have affected the tissue appearance (e.g., color changes due to camera noise or oscillations in the power supply).
To account for this possibility, five elements of the Macbeth chart (white, blue, light skin, red, and green) were mounted around the box hole. The color values of these elements in the photographs were used to calibrate the images as follows.

Fig. 2. Schematic of the photograph acquisition box.

First, an arithmetic mean filter (Eq. (1)) was applied to reduce the Gaussian noise observed in each channel (red, green, blue) of the Macbeth elements. The filter mask has the following formulation:

\hat{f}(x,y) = \frac{1}{mn} \sum_{(s,t) \in S_{xy}} g(s,t)    (1)

where g and \hat{f} are the original and restored images, and m and n represent the dimensions of the neighborhood window S_{xy} centered at (x, y). Next, we attempt to bring the RGB values of the images closer to their true values, using the white Macbeth element as a reference. The correction is expressed by the following equation:

(R', G', B') = (R \cdot R_{ref}/R_w,\; G \cdot G_{ref}/G_w,\; B \cdot B_{ref}/B_w)    (2)

where R', G', and B' are the corrected values and R, G, and B are the original values. R_{ref}, G_{ref}, and B_{ref} represent the red, green, and blue values of the white Macbeth chart, and R_w, G_w, and B_w are the mean color values of the white reference in the captured image. One last calibration step was performed to restore the color balance. This step is inspired by the calibration algorithm developed by Herbin et al. [43]. The white reference is achromatic, but in the captured images its red, green, and blue values differ due to the spectral distribution of the light source and the properties of the camera sensor [43]. Denoting by \mathrm{Max}(R_w, G_w, B_w) the maximum of the mean red, green, and blue values for the white reference image, we define another set of multiplicative coefficients: \mathrm{Max}(R_w, G_w, B_w)/R_w, \mathrm{Max}(R_w, G_w, B_w)/G_w, and \mathrm{Max}(R_w, G_w, B_w)/B_w.
The final red, green, and blue values of the pixels are then

(R' \cdot \mathrm{Max}(R_w,G_w,B_w)/R_w,\; G' \cdot \mathrm{Max}(R_w,G_w,B_w)/G_w,\; B' \cdot \mathrm{Max}(R_w,G_w,B_w)/B_w)    (3)

For example, the true RGB values of the white Macbeth chart are (243, 242, 243), whereas the mean RGB values of the white reference in a captured image were (212, 216, 221). In the calibrated image (after applying Eq. (3)), the mean values changed to (242, 244, 243). Before calibration, the standard deviations of the RGB channels within the blue, light skin, red, and green Macbeth charts were (8.7, 8.1, 8.8), (5.3, 4.8, 5.5), (7.7, 7.1, 7.9), and (8.7, 8.0, 8.9), respectively. After calibration, the standard deviations changed to (3.1, 2.0, 3.6), (2.0, 1.7, 2.9), (2.1, 2.1, 3.4), and (3.1, 1.9, 3.5). After calibration, a 1 × 1 cm² region (1000 × 1000 pixels) from the center of the box window was selected for further processing.

2.4. Extracting features

2.4.1. High-frequency ultrasound

The echogenicity of regions in a B-scan indicates the amount of acoustic energy reflected by the corresponding tissue regions. The degree of echogenicity of the corium depends on the amount of collagen fiber material per unit volume; any change in this amount results in acoustic property changes. Collagenous fiber bundles appear as band-like, moderately or highly reflective structures [10]. Since the different structures present during the generation and healing of pressure sores exhibit different attenuation properties, changes in the echogenicity may be assumed to follow meaningful patterns. Reflections from the reticular dermis include hyperechoic lines oriented parallel to the skin surface (Fig. 1). These echoes reflect the presence of collagen fibers. In contrast, because of the low echogenicity of the cellular infiltrate, granulation tissue appears as a hypoechoic region. As the granulation tissue matures, fibroblasts synthesize fibrous extracellular matrix proteins (including hyperechoic collagen) [12].
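Looping back to Section 2.3, the three-step calibration of Eqs. (1)–(3) can be sketched in a single function. This is a minimal illustration, not the authors' implementation: the function name, the 3 × 3 smoothing window, the replicate border padding, and the use of the quoted true white values (243, 242, 243) as (R_ref, G_ref, B_ref) are all assumptions.

```python
import numpy as np

def calibrate(img, white_patch):
    """Sketch of the calibration chain: mean filter (Eq. (1)), white-reference
    scaling (Eq. (2)), and color-balance restoration (Eq. (3)).

    img         : H x W x 3 RGB image to correct
    white_patch : region of the white Macbeth element from the same image
    """
    # Eq. (1): 3x3 arithmetic mean filter on the white patch to suppress
    # Gaussian channel noise before measuring the reference (window size
    # is an assumption; the paper does not state m and n).
    m = n = 3
    pad = np.pad(white_patch, ((1, 1), (1, 1), (0, 0)), mode="edge")
    smoothed = np.zeros(white_patch.shape, dtype=float)
    for dy in range(m):
        for dx in range(n):
            smoothed += pad[dy:dy + white_patch.shape[0],
                            dx:dx + white_patch.shape[1]]
    smoothed /= m * n

    # Mean measured values of the white reference: (Rw, Gw, Bw)
    w = smoothed.reshape(-1, 3).mean(axis=0)

    # Eq. (2): scale each channel toward the true white-chart values,
    # taken here as the (243, 242, 243) quoted in the text.
    ref = np.array([243.0, 242.0, 243.0])
    out = img.astype(float) * (ref / w)

    # Eq. (3): multiplicative coefficients Max(Rw,Gw,Bw)/channel
    # to restore the color balance of the achromatic reference.
    out *= w.max() / w
    return np.clip(out, 0.0, 255.0)
```

Applied to a flat patch equal to the measured white reference, the function simply chains the two channel-wise gains of Eqs. (2) and (3), which makes the sketch easy to sanity-check against the equations.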
These phenomena result in changes in the intensity and textural content of the ROI. Five relevant features are extracted from the ultrasound image to evaluate echogenicity and tissue structure within the ROI. Before calculating these features, the gray levels of the HFU image were reduced from 256 to 64.

2.4.1.1. Statistical parameters. Three statistical parameters, namely the angular second moment (ASM), contrast (CON), and correlation (COR) [18], were extracted from the co-occurrence matrix of the ROI. These parameters were selected for their ability to model texture.

2.4.1.2. Fractal signature. Changes in the properties of a picture with changes in scale are a sign of fractal properties [44–46]. The fractal surface area may be calculated from the gray levels of an image [45]. A technique introduced by Mandelbrot [41] can be used to compute the area of the fractal surface at any given scale. The slope of the line resulting from plotting fractal surface area against scale is known as the fractal signature [46]. The fractal signature contains important information about the fineness of variations in the gray level surface, with no need to decompose the image into harmonics (i.e., a Fourier transform) or wavelets [45]. In previous works we developed a modified fractal signature (MFS), defined as the slope of the line resulting from plotting the upper blanket surface against scale on a log–log basis [22,47]. In HFU images, the MFS measures the echo distribution in the ROI and therefore serves as a texture descriptor. The MFS is more suitable than the original fractal signature for extracting features from HFU scans [47].

2.4.1.3. Echogenicity. All four of the features discussed above are texture descriptors, and therefore good candidates for monitoring changes in the echographic structure of the ROI. The following feature is included to measure changes in ROI echogenicity as a result of pressure sore generation and healing.
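As a concrete illustration of Section 2.4.1.1, the three co-occurrence parameters can be computed as below. This is a sketch under stated assumptions: the single displacement vector (one pixel to the right) is an assumption, since the text does not specify the offsets used, and the function name is illustrative.

```python
import numpy as np

def cooccurrence_features(roi, levels=64, offset=(0, 1)):
    """ASM, contrast, and correlation from a normalized co-occurrence matrix.

    roi must hold integer gray levels already quantized to `levels`
    (the paper reduces 256 levels to 64 before feature extraction).
    """
    dy, dx = offset
    P = np.zeros((levels, levels), dtype=float)
    h, w = roi.shape
    # Count pixel pairs separated by the chosen displacement.
    for y in range(h - dy):
        for x in range(w - dx):
            P[roi[y, x], roi[y + dy, x + dx]] += 1
    P /= P.sum()  # normalize counts to a joint probability

    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)                        # angular second moment
    con = np.sum(((i - j) ** 2) * P)            # contrast
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)   # marginal means
    si = np.sqrt(np.sum(((i - mu_i) ** 2) * P))
    sj = np.sqrt(np.sum(((j - mu_j) ** 2) * P))
    cor = np.sum((i - mu_i) * (j - mu_j) * P) / (si * sj)  # correlation
    return asm, con, cor
```

On a two-level checkerboard, for instance, horizontally adjacent pairs always differ, so the sketch yields maximal contrast and a correlation of −1, which matches the textbook definitions in [18].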
Since in the ROI collagen fibers are visualized as hyperechoic bands parallel to the skin surface, a mask with the following structure was applied to the ROI after reducing the gray levels from 256 to 64:

F = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}    (4)

This anisotropic mask F may be used to implement the gradient operator along the horizontal direction, thereby highlighting the presence of vertical lines in the ROI. A weight value of 2 was used to add a smoothing effect [18]. After filtering the ROI with F, the root mean square (RMS) of the pixel values was computed as a measure of the ROI echogenicity (referred to as RE) (Fig. 3).

Fig. 3. High-frequency ultrasound scans of the ROI before (day 7, a; day 21, d) and after (day 7, b; day 21, e) the gray levels were reduced from 256 to 64. Also shown are the filtered images (day 7, c; day 21, f). The vertical lines in images c and f are highlighted.

The features extracted from the HFU images were normalized to the interval [0, 1].

Fig. 4. The 2D hue-saturation histogram of a digital photograph obtained from a pressure sore.

2.4.2. Color features

In this research, color features were also used to differentiate the stages of pressure sore generation and healing. The RGB system is not adequate for this purpose, since its components are strongly dependent on the ambient light intensity. Instead, we chose the HSI (hue-saturation-intensity) color model for modeling the ROI. Since our main objective is to investigate color changes in the altering tissue, the intensity component is neglected. Rather than segmenting the sore region or comparing superimposed images from different days, we extracted time-variant color features over the whole ROI. First, we computed a 50 × 50-bin (2D) histogram of the ROI, with its axes representing the hue and saturation components.
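Before continuing with the histogram features, here is a minimal sketch of the RE measure of Section 2.4.1.3: correlate the ROI with the mask F of Eq. (4) and take the RMS of the response. The replicate border padding and the function name are assumptions, since the paper does not specify border handling.

```python
import numpy as np

def roi_echogenicity(roi):
    """RE feature sketch: filter the ROI with the anisotropic mask F of
    Eq. (4) (a horizontal-gradient detector with Sobel-style weight 2 for
    smoothing) and return the RMS of the filtered pixel values."""
    F = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=float)
    pad = np.pad(roi.astype(float), 1, mode="edge")
    out = np.zeros(roi.shape, dtype=float)
    # Correlate (not convolve) with F; for this mask the distinction is
    # only a sign flip, which the RMS discards anyway.
    for y in range(roi.shape[0]):
        for x in range(roi.shape[1]):
            out[y, x] = np.sum(F * pad[y:y + 3, x:x + 3])
    return np.sqrt(np.mean(out ** 2))
```

Because the mask's coefficients sum to zero, a flat ROI yields RE = 0, while any vertical intensity transition produces a positive response, consistent with the mask's role as a vertical-line highlighter.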
Each image presents a number of peaks in the 2D histogram with different positions, areas, and intensities. In our experiments, we observed that these features evolve according to a meaningful pattern during the generation and healing of pressure sores. First, the connected regions in each histogram were extracted. Second, we recorded the centroid of each region and its corresponding image area (in pixels). The sum of the pixel intensities in each region was also computed. The first and second elements of the centroid are the horizontal and vertical coordinates, respectively. In order to reduce the computational burden, only the two largest connected regions in each histogram were considered for feature extraction. Thus, we have six color features for each image histogram: the centroids (CEN), areas (AR), and intensity sums (INT) of the two connected regions. Fig. 4 shows an example 2D histogram, its two major regions, and the corresponding feature values. All color features except those representing the location of a centroid were normalized to the interval [0, 1].

2.5. Classification

The extracted features were grouped into five categories. The first consists of ASM, CON, and COR. The second includes MFS and RE. The features CEN, AR, and INT were placed into three individual categories. Five Multi-Layer Perceptron Neural Networks (MLP NNs) were trained, one on each category of features. Each network had the same structure and used a tan-sigmoid transfer function. The weights and biases of the networks were initialized with random values taken from the interval [−1, 1]. The weights between neurons were updated only after all training examples had been presented to the network (batch training). Levenberg–Marquardt back-propagation was found to produce the best results. The network structure has two hidden layers, with eight and three neurons, respectively.
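Returning briefly to the color features of Section 2.4.2, the extraction of CEN, AR, and INT from the hue-saturation histogram can be sketched as follows. The 4-connectivity used to define connected histogram regions, the hue/saturation scaling to [0, 1], and the function name are illustrative assumptions, not details given in the paper.

```python
import numpy as np
from collections import deque

def hs_histogram_features(hue, sat, bins=50):
    """Build a 50x50 hue-saturation histogram, label its nonzero connected
    regions with a breadth-first flood fill, and return the centroid (CEN),
    area (AR), and intensity sum (INT) of the two largest regions."""
    H, _, _ = np.histogram2d(hue.ravel(), sat.ravel(),
                             bins=bins, range=[[0, 1], [0, 1]])
    labels = np.zeros(H.shape, dtype=int)
    regions = []
    for sy in range(bins):
        for sx in range(bins):
            if H[sy, sx] > 0 and labels[sy, sx] == 0:
                # flood-fill one 4-connected region of nonzero bins
                rid = len(regions) + 1
                q, cells = deque([(sy, sx)]), []
                labels[sy, sx] = rid
                while q:
                    y, x = q.popleft()
                    cells.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < bins and 0 <= nx < bins \
                           and H[ny, nx] > 0 and labels[ny, nx] == 0:
                            labels[ny, nx] = rid
                            q.append((ny, nx))
                ys = np.array([c[0] for c in cells])
                xs = np.array([c[1] for c in cells])
                regions.append({
                    "AR": len(cells),               # area in histogram bins
                    "INT": float(H[ys, xs].sum()),  # intensity sum
                    "CEN": (float(xs.mean()), float(ys.mean())),
                })
    # keep only the two largest connected regions, as in the paper
    return sorted(regions, key=lambda r: r["AR"], reverse=True)[:2]
```

On synthetic hue/saturation data clustered around two points, the sketch returns two single-bin regions whose INT values equal the cluster sizes, mirroring the six-feature description above.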
The number of hidden layers and neurons was set based on trials carried out prior to the main training process. The output layer has four neurons, representing soft support for the four image classes (day 3, day 7, day 14, and day 21). The outputs were then fused using a fuzzy integral in order to determine the final decision of the ensemble. Each network is referred to as a 'classifier' henceforth.

2.6. Combining the decisions of neural networks with the fuzzy integral

The problem in the simultaneous application of HFU and digital photography is that the credibility of their votes in determining the phase of pressure sore generation and healing varies with time. Among the decision fusion techniques used for continuous-valued outputs, class-indifferent combiners (e.g., decision templates and Dempster–Shafer) are not appropriate candidates for combining the decisions of the classifiers in the present experiment, since the combination strategy for determining the final support of each class must be different. Class-conscious combiners fall into two categories: trainable and non-trainable combiners [52]. Here we seek a trainable combiner whose parameters, and therefore combining paradigm, are set based on the data presented to the system. Our choice of combination technique is the fuzzy integral (FI), which has been successfully employed in several different contexts [48–50]. Here, we review only the basic concepts. A more detailed description of the FI algorithm can be found in Refs. [50,51]. The fuzzy measure g is a set function defined on a finite set of elements X, satisfying the following conditions:

(1) g(∅) = 0
(2) g(X) = 1
(3) g(A ∪ B) = g(A) + g(B) + λ g(A) g(B) for all A, B ⊆ X with A ∩ B = ∅, for some λ > −1.

The fuzzy measure g can be evaluated from a set of L values g^i known as fuzzy densities.
The latter may be interpreted as the importance of the individual classifiers to the final evaluation. The parameter λ is the only root of the following polynomial that is greater than −1:

\lambda + 1 = \prod_{i=1}^{L} (1 + \lambda g^i), \quad \lambda \neq 0    (5)

The FI is a nonlinear function and can be regarded as an aggregation operator, defined over X with respect to g:

\int_X h(x) \circ g(\cdot) = \sup_{\alpha} \left[ \min\left(\alpha,\; g(\{x \mid h(x) \geq \alpha\})\right) \right]    (6)

where h : X → [0, 1] denotes the confidence value delivered by the elements of X (e.g., the class membership of the data as determined by a specific classifier). By sorting the values of h(·) in descending order, h(x_1) ≥ h(x_2) ≥ … ≥ h(x_n), the Sugeno FI can be evaluated as

\int_X h(x) \circ g(\cdot) = \max_{i=1}^{n} \left[ \min\left(h(x_i),\; g(A_i)\right) \right]    (7)

where A_i = {x_1, x_2, …, x_i}, i = 1, 2, …, n. g(A_i) can be calculated recursively by assuming g(A_1) = g^1 and

g(A_i) = g^i + g(A_{i-1}) + \lambda g^i g(A_{i-1}), \quad 1 < i \leq n    (8)

Here, the g^i were taken to be the accuracy of each classifier evaluated during the training process. Fig. 5 illustrates the ensemble developed for the objective of this research.

Fig. 5. A schematic of the ensemble developed for providing the final decision on the phase of pressure sores, using five neural networks and a fuzzy integral.

Fig. 6. Digital photographs obtained from an induced pressure sore, from days 3 (a), 7 (b), 14 (c), and 21 (d). CE, calibration element. The distance from B to B* is ~0.5 mm. Ultrasound scans obtained from days 3 (e), 7 (f), 14 (g), and 21 (h). White arrows on day 21 indicate the muscle fascia (MF). FT, fatty tissue; D, dermis; EE, entry echo. The distances from A to A* and from A to A** are ~1.5 and ~2.5 mm, respectively.

2.7. Performance evaluation

The Leave One Out technique, which is mainly used to evaluate the performance of classifiers [18], was adopted to validate the performance of the designed ensemble. At each step, one of the 28 samples was left out for testing purposes and the five neural networks were trained on the features extracted from the remaining 27 samples.
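To make Eqs. (5), (7), and (8) of Section 2.6 concrete, the following sketch solves for λ and evaluates the Sugeno integral. The bisection solver, its tolerances, and the function names are implementation choices of this illustration, not from the paper; the densities are assumed to lie in (0, 1), as classifier accuracies do.

```python
import numpy as np

def sugeno_lambda(g, iters=200):
    """Solve Eq. (5), lambda + 1 = prod(1 + lambda * g_i), for the unique
    root greater than -1 (excluding lambda = 0) by bisection."""
    f = lambda lam: float(np.prod([1.0 + lam * gi for gi in g]) - lam - 1.0)
    s = sum(g)
    if abs(s - 1.0) < 1e-12:
        return 0.0            # densities summing to 1 give an additive measure
    if s < 1.0:               # root is positive
        lo, hi = 1e-9, 1.0
        while f(hi) < 0:      # grow the bracket until the sign changes
            hi *= 2.0
    else:                     # root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(mid) < 0) == (f(lo) < 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sugeno_integral(h, g):
    """Eq. (7): sort supports in descending order, build g(A_i) recursively
    via Eq. (8), and take the max over min(h_i, g(A_i))."""
    lam = sugeno_lambda(g)
    order = np.argsort(h)[::-1]
    h_sorted = [h[i] for i in order]
    g_sorted = [g[i] for i in order]
    gA = g_sorted[0]                      # g(A_1) = g^1
    val = min(h_sorted[0], gA)
    for i in range(1, len(h)):
        gA = g_sorted[i] + gA + lam * g_sorted[i] * gA   # Eq. (8)
        val = max(val, min(h_sorted[i], gA))
    return val
```

A useful sanity check: the recursion of Eq. (8) must terminate at g(A_n) = g(X) = 1 for the λ of Eq. (5), so when all classifiers report the same support c, the integral returns exactly c.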
The performance of the ensemble was evaluated as the mean accuracy over the 28 test iterations.

3. Results

The digital photographs and HFU images obtained from one animal on different days are presented in Fig. 6. Images 6(a)–(d) were captured using the apparatus illustrated in Fig. 2. As is apparent in Fig. 6(b), the pressure sore reaches its maximum severity at the surface on day 7. The hypoechoic regions visible in Fig. 6(e) may be related to blood pools or edema. Note that the hyperechoic band at the boundary of the dermis and fatty tissue disappears on day 7, but reappears on day 21. Table 1 displays the significance of the features as determined by analysis of variance (ANOVA) and multiple comparison tests. The confidence intervals in brackets indicate whether the means of two days are statistically different. Note that day 3 and day 21 cannot be distinguished on the basis of these features, as all five intervals include zero. Features extracted from the two largest connected regions of the 2D histogram were evaluated by the same tests (Tables 2 and 3). The confidence intervals suggest that day 3 could not be distinguished from day 7, and that day 14 could not be distinguished from day 21. Tables 1–3 demonstrate that while the features extracted from the different imaging techniques (digital photography and HFU) fail to discriminate some classes, the classes presenting difficulties are not the same for the two techniques. Therefore, it may be assumed that by combining the decisions made by each of the classifiers, trained on the extracted features, a better conclusion can be obtained about the phase of pressure sore generation and healing.

Table 1
The degree of class separation in HFU features studied through one-way ANOVA and multiple comparison tests.
MD is the mean difference, and CI is the 95% confidence interval. All five features are significant at P < 0.05 (CON: P = 1.3 × 10^-21; COR: P = 2.79 × 10^-11; ASM: P = 2.8 × 10^-24; RE: P = 1.4 × 10^-29; MFS: P = 1.02 × 10^-29).

Days          CON MD [95% CI]         COR MD [95% CI]         ASM MD [95% CI]         RE MD [95% CI]          MFS MD [95% CI]
Day3/Day7     0.53 [0.41, 0.64]       -0.29 [-0.41, -0.17]    -0.36 [-0.46, -0.27]    0.35 [0.26, 0.45]       -0.46 [-0.57, -0.35]
Day3/Day14    0.19 [0.08, 0.31]       -0.20 [-0.32, -0.08]    -0.11 [-0.20, -0.01]    0.10 [0.01, 0.20]       -0.12 [-0.23, -0.00]
Day3/Day21    -0.01 [-0.12, 0.10]     0.01 [-0.10, 0.13]      0.03 [-0.07, 0.12]      -0.01 [-0.10, 0.09]     0.05 [-0.07, 0.16]
Day7/Day14    -0.33 [-0.44, -0.22]    0.09 [-0.03, 0.21]      0.26 [0.16, 0.35]       -0.25 [-0.35, -0.16]    0.35 [0.23, 0.46]
Day7/Day21    -0.54 [-0.65, -0.43]    0.30 [0.19, 0.42]       0.39 [0.29, 0.49]       -0.36 [-0.45, -0.26]    0.51 [0.39, 0.62]
Day14/Day21   -0.20 [-0.32, -0.09]    0.21 [0.09, 0.33]       0.13 [0.04, 0.23]       -0.11 [-0.21, -0.01]    0.16 [0.05, 0.27]

Table 2
The degree of class separation in features of the hue-saturation color histogram studied through one-way ANOVA and multiple comparison tests. These features belong to the largest connected region in the 2D histogram. The first and second CEN columns are the horizontal and vertical coordinates of the region's centroid, respectively. MD is the mean difference, and CI is the 95% confidence interval.
All features are significant at P < 0.05 (AR: P = 2.23 × 10^-14; INT: P = 2.64 × 10^-24; CEN horizontal: P = 4 × 10^-7; CEN vertical: P = 4.32 × 10^-6).

Days          AR MD [95% CI]        INT MD [95% CI]         CEN-h MD [95% CI]      CEN-v MD [95% CI]
Day3/Day7     0.06 [-0.02, 0.14]    0.07 [-0.02, 0.16]      -0.44 [-2.22, 1.33]    -0.59 [-1.47, 0.29]
Day3/Day14    0.21 [0.13, 0.29]     -0.23 [-0.32, -0.13]    2.41 [0.63, 4.18]      0.73 [-0.15, 1.61]
Day3/Day21    0.25 [0.17, 0.33]     -0.28 [-0.37, -0.19]    2.98 [1.20, 4.75]      1.15 [0.27, 2.03]
Day7/Day14    0.15 [0.07, 0.23]     -0.29 [-0.39, -0.20]    2.85 [1.07, 4.62]      1.32 [0.44, 2.20]
Day7/Day21    0.19 [0.11, 0.27]     -0.35 [-0.44, -0.26]    3.42 [1.64, 5.19]      1.74 [0.86, 2.62]
Day14/Day21   0.04 [-0.04, 0.12]    -0.05 [-0.15, 0.04]     0.57 [-1.20, 2.34]     0.42 [-0.46, 1.30]

Table 3
The degree of class separation in features of the hue-saturation color histogram, studied through one-way ANOVA and multiple comparison tests. These features belong to the second largest connected region in the 2D histogram. The first and second CEN columns are the horizontal and vertical coordinates of the region's centroid, respectively. MD is the difference between the two means, and CI is the 95% confidence interval. All features are significant at P < 0.05 (AR: P = 1.01 × 10^-26; INT: P = 1.51 × 10^-5; CEN horizontal: P = 1.62 × 10^-27; CEN vertical: P = 3.11 × 10^-10).

Days          AR MD [95% CI]          INT MD [95% CI]         CEN-h MD [95% CI]       CEN-v MD [95% CI]
Day3/Day7     -0.14 [-0.19, -0.08]    -0.21 [-0.32, -0.10]    -3.75 [-5.64, -1.85]    -0.35 [-0.66, -0.03]
Day3/Day14    0.12 [0.07, 0.17]       -0.06 [-0.18, 0.05]     4.21 [2.31, 6.10]       0.34 [0.02, 0.65]
Day3/Day21    0.14 [0.08, 0.19]       -0.02 [-0.14, 0.09]     4.37 [2.47, 6.26]       0.50 [0.19, 0.81]
Day7/Day14    0.25 [0.20, 0.31]       0.15 [0.03, 0.26]       7.95 [6.06, 9.85]       0.68 [0.37, 0.10]
Day7/Day21    0.27 [0.22, 0.32]       0.19 [0.07, 0.30]       8.12 [6.22, 10.01]      0.84 [0.53, 1.16]
Day14/Day21   0.01 [-0.04, 0.07]      0.04 [-0.07, 0.15]      0.16 [-1.73, 2.06]      0.16 [-0.15, 0.47]

Table 4
Confusion matrix for the test results of classifier 1 in Fig. 5.
             Classified as:
             Day 3    Day 7    Day 14    Day 21
Day 3          11       0         4        13
Day 7           1      21         2         0
Day 14          5       7        19         5
Day 21         11       0         3        10
Total (%)    39.3    75.0      67.9      35.7     (overall: 54.5)

Table 5
Confusion matrix for the test results of classifier 2 in Fig. 5.

             Classified as:
             Day 3    Day 7    Day 14    Day 21
Day 3          10       4         4        11
Day 7           1      21         2         0
Day 14          5       3        17         5
Day 21         12       0         5        12
Total (%)    35.7    75.0      60.7      42.9     (overall: 53.6)

Tables 4–8 present the confusion matrices generated by classifiers 1 through 5, respectively. For the classifiers trained on features extracted from the HFU images, the highest percentages of misclassified samples occur between days 3 and 21. For the classifiers trained on digital photography features, the highest percentages of misclassified samples occur between days 14 and 21. Table 9 presents the confusion matrix obtained after combining the results with the FI method. The improvements in classification accuracy are noticeable.

Table 6
Confusion matrix for the test results of classifier 3 in Fig. 5.

             Classified as:
             Day 3    Day 7    Day 14    Day 21
Day 3          25       2         0         0
Day 7           3      26         1         0
Day 14          0       0        10        12
Day 21          0       0        17        16
Total (%)    89.3    92.9      35.7      57.1     (overall: 68.8)

Table 7
Confusion matrix for the test results of classifier 4 in Fig. 5.

             Classified as:
             Day 3    Day 7    Day 14    Day 21
Day 3          25       1         0         0
Day 7           1      25         5         2
Day 14          1       1        11         8
Day 21          1       1        12        18
Total (%)    89.3    89.3      39.3      64.3     (overall: 70.5)

Table 8
Confusion matrix for the test results of classifier 5 in Fig. 5.

             Classified as:
             Day 3    Day 7    Day 14    Day 21
Day 3          20       1         3         4
Day 7           4      27         1         0
Day 14          3       0        17         7
Day 21          1       0         7        17
Total (%)    71.4    96.4      60.7      60.7     (overall: 72.3)

Table 9
Confusion matrix for the test results after combining all five classifiers by the FI method.
             Classified as
             Day 3    Day 7    Day 14   Day 21
Day 3          23        0        1        2
Day 7           3       28        1        1
Day 14          2        0       23        7
Day 21          0        0        3       18
Total (%)    82.1     100      82.1     64.3     Overall: 82.1

4. Discussion

Both HFU and digital photography have been previously used for the assessment of pressure sores. Our method is specifically designed to improve the recognition of pressure sore generation and healing phases by using the two imaging techniques simultaneously. In this way, both tissue structure and skin appearance are taken into account.

The generation and healing of pressure sores involve the presence of cells, matrices and complex structures. The hypoechoic regions visible in the HFU scans on day 3 (Fig. 6(a)) were assumed to be caused by edema and blood pools. In the corresponding digital photograph, only a red area was observed. Redness is considered an early sign of pressure sore generation [4]. On day 7, hypoechoic regions dominate the HFU scan, signaling the appearance of granulation tissue (cellular infiltrate has low echogenicity) [12,13]. As Fig. 6(f) shows, the superficial layers also attain maximum severity on day 7. As healing proceeds, the granulation tissue matures and fibroblasts synthesize fibrous extracellular matrix proteins (including hyperechoic collagen) [12]. These events result in the appearance of hyperechoic regions in the HFU scans from day 14 onward. Meanwhile, the digital photographs illustrate superficial signs of tissue healing.

The features extracted from the HFU images were selected based on their ability to monitor the echogenicity and texture of the ROI. Table 1 shows that these features are generally good at discriminating the four days, with the exception of comparing day 3 to day 21. This is due to the resemblance of the scans obtained on these two days.
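The kinds of echogenicity and texture statistics described above can be illustrated with a minimal sketch. This is not the authors' exact feature set: the mean gray level as an echogenicity proxy, the 8-level quantization, and the horizontal co-occurrence offset are all illustrative assumptions, loosely following the co-occurrence-matrix parameters of Valckx and Thijssen [19].

```python
import numpy as np

def mean_echogenicity(roi):
    """Average gray level of the ROI, a simple echogenicity proxy."""
    return float(np.mean(roi))

def glcm_contrast(roi, levels=8):
    """Contrast of a horizontal-offset gray-level co-occurrence matrix.

    The quantization to `levels` bins is an assumption; in practice the
    binning would be tuned to the scanner's dynamic range.
    """
    q = np.clip((roi.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of horizontally adjacent pixel pairs.
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

# A hypoechoic (dark, fairly uniform) patch should score lower on both
# measures than a bright speckled one.
rng = np.random.default_rng(0)
dark = rng.integers(0, 40, (64, 64))
speckle = rng.integers(0, 256, (64, 64))
print(mean_echogenicity(dark) < mean_echogenicity(speckle))  # True
print(glcm_contrast(dark) < glcm_contrast(speckle))          # True
```

Features of this sort, computed over the ROI at the four imaging days, are what allow a classifier to track the shift from hypoechoic (edema, cellular infiltrate) toward hyperechoic (collagen-rich) tissue.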
Since the objective of this research is to create an automatic system capable of recognizing phases of pressure sore generation and healing, this drawback could cause problems. However, Tables 2 and 3 illustrate that these days are separable based on features extracted from the digital photograph color histogram. Since the day pairs that the two techniques fail to discriminate do not overlap, applying both techniques simultaneously achieves a better recognition rate. These statements are supported statistically by the confusion matrices presented in Tables 4–9. Fusing the five classifier outputs improved the total accuracy and reduced the misclassification rate.

In this study the four classes represent different phases of pressure sore generation and healing. Therefore, it can be suggested that the developed method may be capable of detecting the phases of pressure sore generation and healing, provided that an HFU scan and a digital photograph of the suspicious region are available.

Our study has several limitations. First, given the ethical considerations of small animal use, our hypothesis could not be tested on more severe or non-healing sores. Second, pressure sores in humans do not necessarily follow the same generation and healing patterns. Therefore, a large human database is required to define the recognizable phases of pressure sore generation and healing. Researchers have considered clinical parameters (e.g. pain, swelling, and itching) in the assessment of wounds [53]. Considering these symptoms (by means of rating tools) alongside the extracted parameters in future human studies could result in a better assessment of pressure sore status.

In the present study, the monitored region was marked on the body of the animal, and precautions were taken when positioning the probe head and imaging box. Our database therefore consists of images obtained from the same region over time, and of corresponding regions in different animals.
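The fusion step can be sketched as a Sugeno fuzzy integral over the per-class supports of several classifiers, in the spirit of the FI combination used here [48–52]. The fuzzy densities (each classifier's "worth") and the toy supports below are illustrative assumptions, not the trained values from this study.

```python
def lambda_root(densities, tol=1e-10):
    """Solve prod(1 + lam*g_i) = 1 + lam for the lambda-fuzzy measure."""
    def f(lam):
        p = 1.0
        for g in densities:
            p *= 1.0 + lam * g
        return p - (1.0 + lam)
    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0  # additive measure: lambda = 0
    # lambda > 0 when densities sum below 1; -1 < lambda < 0 otherwise.
    lo, hi = (tol, 1e6) if s < 1.0 else (-1.0 + tol, -tol)
    for _ in range(200):  # bisection on the bracketing interval
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_integral(supports, densities, lam):
    """Sugeno integral of classifier supports w.r.t. a lambda-measure."""
    order = sorted(range(len(supports)), key=lambda i: -supports[i])
    g_prev, best = 0.0, 0.0
    for i in order:
        g_prev = densities[i] + g_prev + lam * densities[i] * g_prev
        best = max(best, min(supports[i], g_prev))
    return best

def fuse(class_supports, densities):
    """Pick the class whose fused support is largest."""
    lam = lambda_root(densities)
    scores = {c: sugeno_integral(s, densities, lam)
              for c, s in class_supports.items()}
    return max(scores, key=scores.get)

# Toy example: three classifiers, supports for the four imaging days.
supports = {
    "day3":  [0.7, 0.2, 0.6],
    "day7":  [0.2, 0.6, 0.3],
    "day14": [0.1, 0.1, 0.1],
    "day21": [0.0, 0.1, 0.0],
}
densities = [0.3, 0.3, 0.3]  # illustrative classifier worths
print(fuse(supports, densities))  # day3
```

The integral rewards agreement: "day3" wins because two classifiers support it strongly, even though a single classifier's top vote goes elsewhere.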
Drawing conclusions about sore generation or healing from scans and images obtained from various subjects and areas of the body is a much more difficult task. Future clinical research on applying the proposed method to human skin will have to deal with the intrinsic variance of images obtained from unknown parts of the body.

5. Conclusion

In this study, HFU scans and digital photographs were obtained from pressure sores induced in guinea pigs. Several relevant statistical features were extracted from the obtained images. A set of five neural networks was trained on five separate categories of features. The results demonstrate that a more accurate determination of the phase of pressure sore generation and healing can be achieved by utilizing both imaging techniques simultaneously. Therefore, we conclude that the individual drawbacks of HFU and digital photography can be overcome by combining information from both techniques.

Conflict of interest statement

None declared.

References

[1] R. Salcido, S.B. Fisher, J.C. Donofrio, M. Bieschke, C. Knapp, R. Liang, E.K. LeGrand, J.M. Carney, An animal model and computer controlled surface pressure delivery system for the production of pressure ulcers, J. Rehabil. Res. Dev. 32 (2) (1995) 149–161.
[2] D.T. Rovee, H.I. Maibach, The Epidermis in Wound Healing, CRC Press, 2003.
[3] M.E. Wendelken, L. Markowitz, M. Patel, O.M. Alvarez, Objective, non-invasive wound assessment using B-mode ultrasonography, Wounds 15 (11) (2003) 351–360.
[4] D. Bader, C. Bouten, D. Colin, C. Oomens, Pressure Ulcer Research: Current and Future Perspectives, Springer-Verlag, 2005.
[5] P. Plassmann, T.D. Jones, MAVIS: a non-invasive instrument to measure area and volume of wounds, Med. Eng. Phys. 20 (1998) 332–338.
[6] S. Treuillet, B. Albouy, Y. Lucas, Three-dimensional assessment of skin wounds using a standard digital camera, IEEE Trans. Med. Imag. 28 (5) (2009) 752–762.
[7] A. Malian, A. Azizi, F.H.V. Den, M. Zolfaghari, Development of a robust photogrammetric metrology system for monitoring the healing of bedsores, Photogrammetric Rec. 20 (111) (2005) 241–273.
[8] G.L. Hansen, E.M. Sparrow, J.Y. Kokate, K.J. Leland, P.A. Iaizzo, Wound status evaluation using color image processing, IEEE Trans. Med. Imag. 16 (1) (1997) 78–86.
[9] M. Herbin, F.X. Bon, A. Venot, F. Jeanlouis, M.L. Dubertret, G. Strauch, Assessment of healing kinetics through true color image processing, IEEE Trans. Med. Imag. 12 (1) (1993) 39–43.
[10] P. Altmeyer, S. El-Gammal, K. Hoffmann, Ultrasound in Dermatology, Springer-Verlag, Berlin Heidelberg, 1992.
[11] F.K. Forster, J.E. Olerud, M.A. Riederer-Henderson, A.W. Holmes, Ultrasonic assessment of skin and surgical wounds utilizing backscatter acoustic techniques to estimate attenuation, Ultrasound Med. Biol. 16 (1) (1990) 43–53.
[12] M.G. Rippon, K. Springett, R. Walmsley, K. Patrick, S. Millson, Ultrasound assessment of skin and wound tissue: comparison with histology, Skin Res. Technol. 4 (1998) 147–154.
[13] M.G. Rippon, K. Springett, R. Walmsley, Ultrasound evaluation of acute experimental and chronic clinical wounds, Skin Res. Technol. 5 (1999) 228–236.
[14] M. Dyson, S. Moodley, L. Verjee, W. Verling, J. Weinman, P. Wilson, Wound healing assessment using 20 MHz ultrasound and photography, Skin Res. Technol. 9 (2003) 116–121.
[15] Y.C. Du, C.M. Lin, Y.F. Chen, C.L. Chen, T. Chen, Implementation of a burn scar assessment system by ultrasound techniques, in: Proceedings of the 28th IEEE EMBS Annual International Conference, 2006, pp. 2328–2331.
[16] P.R. Quintavalle, C.H. Lyder, P.J. Mertz, C. Phillips-Jones, M. Dyson, Use of high-resolution, high-frequency diagnostic ultrasound to investigate the pathogenesis of pressure ulcer development, Adv. Skin Wound Care 19 (9) (2006) 498–505.
[17] S. Prabhakara, Acoustic Imaging of Bruises, Master's thesis, Georgia Institute of Technology, 2004.
[18] S. Theodoridis, K.
Koutroumbas, Pattern Recognition, 2nd ed., Academic Press, San Diego, 2003.
[19] F.M.J. Valckx, J.M. Thijssen, Characterization of echographic image texture by cooccurrence matrix parameters, Ultrasound Med. Biol. 23 (4) (1997) 559–571.
[20] M. Vogt, H. Ermert, S.E. Gammal, K. Kasper, K. Hoffmann, P. Altmeyer, Structural analysis of the skin using high frequency, broadband ultrasound in the range from 30 to 140 MHz, in: Proceedings of the IEEE International Ultrasonics Symposium, 1998, pp. 1685–1688.
[21] W.C. Yeh, Y.M. Jeng, C.H. Li, P.H. Lee, P.C. Li, Liver fatty change classification using 25 MHz high frequency ultrasound, in: Proceedings of the IEEE International Ultrasonics, Ferroelectrics, and Frequency Control Joint 50th Anniversary Conference, 2004, pp. 2169–2172.
[22] S. Moghimi, M.H. Miran Baygi, G. Torkaman, A. Mahloojifar, Quantitative assessment of pressure sore generation and healing through the numerical analysis of high frequency ultrasound images, J. Rehabil. Res. Dev. 47 (2) (2010) 99–108.
[23] Y.V. Haeghen, J. Naeyaert, I. Lemahieu, W. Philips, An imaging system with calibrated color image acquisition for use in dermatology, IEEE Trans. Med. Imag. 19 (7) (2000) 722–730.
[24] G. Hance, S. Umbaugh, R. Moss, V. Stoecker, Unsupervised color image segmentation with application to skin tumor borders, IEEE Eng. Med. Biol. Mag. 5 (1) (1996) 104–111.
[25] M. Nischik, C. Forster, Analysis of skin erythema using true-color images, IEEE Trans. Med. Imag. 16 (6) (1997) 711–716.
[26] B.A. Pinero, C. Serrano, J.I. Acha, Segmentation of burn images using the L*u*v* space and classification of their depth by color and texture information, SPIE 4684 (2002) 1508–1515.
[27] B. Belem, Non-invasive Wound Assessment by Image Analysis, Ph.D. thesis, University of Glamorgan, UK, 2004.
[28] M. Kolesnik, A. Fexa, Multi-dimensional color histograms for segmentation of wounds in images, Lecture Notes Comput. Sci. 3656 (2005) 1014–1022.
[29] H. Oduncu, A. Hoppe, M. Clark, R.J.
Williams, K.J. Harding, Analysis of skin wound images using digital color image processing: a preliminary communication, Int. J. Lower Extremity Wounds 3 (3) (2004) 151–156.
[30] M. Galushka, H. Zheng, D. Patterson, L. Bradley, Case-based tissue classification for monitoring leg ulcer healing, in: Proceedings of the 18th IEEE Symposium on Computer-Based Medical Systems, 2005, pp. 353–358.
[31] H. Zheng, L. Bradley, D. Patterson, M. Galushka, J. Winder, New protocol for leg ulcer classification from colour images, in: Proceedings of the 26th Annual International Conference of the IEEE EMBS, 2004, pp. 1389–1392.
[32] H. Wannous, S. Treuillet, Y. Lucas, Supervised tissue classification from color images for a complete wound assessment tool, in: Proceedings of the 29th Annual International Conference of the IEEE EMBS, 2007, pp. 6031–6034.
[33] A. Hoppe, D. Wertheim, J. Melhuish, H. Morris, K.G. Harding, R.J. Williams, Computer assisted assessment of wound appearance using digital imaging, in: Proceedings of the 23rd Annual EMBS International Conference, 2001, pp. 2595–2597.
[34] C. Serrano, B. Acha, L. Roa, Segmentation and classification of burn colour images, in: Proceedings of the 23rd Annual EMBS International Conference, 2001, pp. 2692–2695.
[35] T.D. Jones, P. Plassmann, An instrument to measure the dimensions of skin wounds, IEEE Trans. Biomed. Eng. 42 (5) (1995) 464–470.
[36] T.D. Jones, P. Plassmann, An active contour model for measuring the area of leg ulcers, IEEE Trans. Med. Imag. 19 (12) (2000) 1202–1210.
[37] T. Krouskop, R. Baker, M. Wilson, A noncontact wound measurement system, J. Rehabil. Res. Dev. 39 (3) (2002) 337–346.
[38] X. Liu, W. Kim, R. Schmidt, B. Drerup, J. Song, Wound measurement by curvature maps: a feasibility study, Physiol. Meas. 27 (2006) 1107–1123.
[39] MAVIS II: 3-D wound measurement instrument, Univ. Glamorgan, 2006 [Online]. Available: http://www.imaging.research.glam.ac.uk/projects/wm/mavis/.
[40] F.X. Bon, E. Briand, S.
Guichard, B. Couturaud, M. Revol, J.M. Servant, L. Dubertret, Quantitative and kinetic evolution of wound healing through image analysis, IEEE Trans. Med. Imag. 19 (7) (2000) 767–772.
[41] R.K. Daniel, D.L. Priest, D.C. Wheatley, Etiologic factors in pressure sores: an experimental model, Arch. Phys. Med. Rehabil. 62 (1981) 492–498.
[42] G. Torkaman, A.A. Sharafi, A. Fallah, H.R. Katoozian, Biomechanical and histological studies of experimental pressure sores in guinea pigs, in: Proceedings of the 10th ICBME, 2000, pp. 463–469.
[43] M. Herbin, A. Venot, Y. Devaux, C. Piette, Color quantitation through image processing in dermatology, IEEE Trans. Med. Imag. 9 (3) (1990) 262–269.
[44] B.B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Co., 1982.
[45] S. Peleg, J. Naor, R. Hartley, D. Avnir, Multiple resolution texture analysis and classification, IEEE Trans. Pattern Anal. Mach. Intell. 6 (4) (1984) 518–522.
[46] S. Lekshmi, K. Revathy, S.R. Prabhakaran Nayar, Galaxy classification using fractal signature, Astron. Astrophys. 405 (2003) 1163–1167.
[47] S. Moghimi, G. Torkaman, M.H. Miran Baygi, A. Mahloojifar, Assessment of artificially induced pressure sores using a modified fractal analysis, J. Appl. Sci. 9 (8) (2009) 1544–1549.
[48] A.S. Kumar, S.K. Basu, K.L. Majumdar, Robust classification of multispectral data using multiple neural networks and fuzzy integral, IEEE Trans. Geosci. Remote Sens. 35 (3) (1997) 787–790.
[49] P.D. Gader, J.M. Keller, B.N. Nelson, Recognition technology for the detection of buried land mines, IEEE Trans. Fuzzy Syst. 9 (1) (2001) 31–43.
[50] Y. Lee, D. Marshall, Curvature based normalized 3D component facial image recognition using fuzzy integral, Appl. Math. Comput. 205 (2008) 815–823.
[51] L.I. Kuncheva, "Fuzzy" versus "nonfuzzy" in combining classifiers designed by boosting, IEEE Trans. Fuzzy Syst. 11 (6) (2003) 729–741.
[52] L.I. Kuncheva, Combining Pattern Classifiers, Wiley-Interscience, 2004.
[53] V. Maida, M. Ennis, C.
Kuziemsky, The Toronto symptom assessment system for wounds: a new clinical and research tool, Adv. Skin Wound Care 22 (10) (2009) 468–474.