Biological Journal of the Linnean Society, 2007, 90, 211–237. With 11 figures

Using digital photography to study animal coloration

MARTIN STEVENS1*, C. ALEJANDRO PÁRRAGA2, INNES C. CUTHILL1, JULIAN C. PARTRIDGE1 and TOM S. TROSCIANKO2

1School of Biological Sciences, University of Bristol, Woodland Road, Bristol BS8 1UG, UK
2Department of Experimental Psychology, University of Bristol, Woodland Road, Bristol BS8 1TN, UK

*Corresponding author. Current address: Department of Zoology, University of Cambridge, Downing Street, Cambridge CB2 3EJ, UK. E-mail: ms726@cam.ac.uk

Received 19 May 2005; accepted for publication 1 March 2006

In understanding how visual signals function, quantifying the components of those patterns is vital. With the ever-increasing power and availability of digital photography, many studies are utilizing this technique to study the content of animal colour signals. Digital photography has many advantages over other techniques, such as spectrometry, for measuring chromatic information, particularly in terms of the speed of data acquisition and its relatively cheap cost. Not only do digital photographs provide a method of quantifying the chromatic and achromatic content of spatially complex markings, but also they can be incorporated into powerful models of animal vision. Unfortunately, many studies utilizing digital photography appear to be unaware of several crucial issues involved in the acquisition of images, notably the nonlinearity of many cameras' responses to light intensity, and biases in a camera's processing of the images towards particular wavebands. In the present study, we set out step-by-step guidelines for the use of digital photography to obtain accurate data, either independent of any particular visual system (such as reflection values), or for particular models of nonhuman visual processing (such as that of a passerine bird). These guidelines include how to: (1) linearize the camera's response to changes in light intensity; (2) equalize the different colour channels to obtain reflectance information; and (3) produce a mapping from camera colour space to that of another colour space (such as photon catches for the cone types of a specific animal species). © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237.

ADDITIONAL KEYWORDS: camera calibration – colour vision – colour measurement – digital cameras – imaging – radiance – reflection – signals.

INTRODUCTION

Investigations into the adaptive functions of animal coloration are widespread in behavioural and evolutionary biology. Probably because humans are 'visual animals' themselves, studies of colour dominate functional and evolutionary investigations of camouflage, aposematism, mimicry, and both sexual and social signalling. However, with advances in our knowledge of how colour vision functions and varies across species, it becomes increasingly important to find means of quantifying the spatial and chromatic properties of visual signals as they are perceived by other animals or, at the very least, in a manner independent of human perception.
This is nontrivial because colour is not a physical property, but rather a function of the nervous system of the animal perceiving the object (Newton, 1718: ‘For the rays, to speak properly, are not coloured’; Endler, 1990; Bennett, Cuthill & Norris, 1994). One way to produce an objective measure of the properties of a colour signal is to measure surface reflectance using spectrophotometry, which provides precise information on the intensity distribution of wavelengths reflected (Endler, 1990; Zuk & Decruye- naere, 1994; Cuthill et al ., 1999; Gerald et al ., 2001; Endler & Mielke, 2005). Reflectance data can also be combined with information on the illuminant and the photoreceptor sensitivities of the receiver (and, if available, neural processing) to model the colours per- ceived by nonhuman animals (Kelber, Vorobyev & Osorio, 2003; Endler & Mielke, 2005). However, con- ventional spectrometers provide only point samples, and to characterize adequately the colour of a hetero- geneous object requires multiple samples across an 212 M. STEVENS ET AL . © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 appropriately designed sampling array, such as multiple transects or prespecified regions (Cuthill et al ., 1999; Endler & Mielke, 2005). This not only has a cost in terms of sampling time, but also the informa- tion about spatial relationships between colours then needs to be reconstructed from the geometry of the sampling array (Endler, 1984) and the spatial resolu- tion is generally crude. Spectrometry also usually requires a static subject, either because of the need to sample an array or because the measuring probe often needs to be close to or touching the colour patch, a particular problem in the field or with delicate museum specimens. Focusing optics can obviate the need for contact with the animal or plant and offer a degree of ‘remote sensing’ (Marshall et al ., 2003; Sumner, Arrese & Partridge, 2005), but this approach is rare. An alternative to spectrometry is photography, which has a long history of use in studies of animal coloration (Thayer, 1896, 1909; Cott, 1940; Tinbergen, 1974; Pietrewicz & Kamil, 1979) but is becoming increasingly used because of the flexibility and appar- ent precision that digital imaging provides. Colour change in the common surgeonfish (Goda & Fujii, 1998), markings in a population of Mediterranean monk seals (Samaranch & Gonzalez, 2000), egg cryp- sis in blackbirds (Westmoreland & Kiltie, 1996), the role of ultraviolet (UV) reflective markings and sexual selection in guppies (Kodric-Brown & Johnson, 2002), and the functions of primate colour patterns (Gerald et al ., 2001) comprise a few recent examples. Digital photography bears many advantages over spectrome- try, particularly in the ability to utilize powerful and complex image processing algorithms to analyse entire spatial patterns, without the need to recon- struct topography from point samples. More obviously, photographing specimens is relatively quick, allowing rapid collection of large quantities of data, from unre- strained targets and with minimal equipment. Imag- ing programs can be used to obtain various forms of data, including colour patch size and distribution measures, diverse ‘brightness’ and colour metrics, or broadband reflection values (such as in the long-, medium-, and short wavebands). Video imaging can provide temporal information too. 
Digital technology also has the potential for manipulating stimuli for use in experiments, with the most impressive examples being in animations within video playback experi- ments (Künzler & Bakker, 1998; Rosenthal & Evans, 1998), although there are problems with these meth- ods that need to be understood (D’Eath, 1998; Fleish- man et al ., 1998; Cuthill et al ., 2000a; Fleishman & Endler, 2000). Digital photography is increasingly incorporated into many studies of animal coloration due to its per- ceived suitability for objectively quantifying colour and colour patterns. However, many studies appear to be unaware of the complex image processing algo- rithms incorporated into many digital cameras, and make a series of assumptions about the data acquired that are rarely met. The images recorded by a camera are not only dependent upon the characteristics of the object photographed, the ambient light, and its geom- etry, but also upon the characteristics of the camera (Barnard & Funt, 2002; Westland & Ripamonti, 2004). Therefore, the properties of colour images are device- dependent, and images of the same natural scene will vary when taken with different cameras because the spectral sensitivity of the sensors and firmware/ software in different cameras varies (Hong, Lou & Rhodes, 2001; Yin & Cooperstock, 2004). Finally, the images are frequently modified in inappropriate ways (e.g. through ‘lossy’ image compression; for a glossary of some technical terms, see Appendix 1) and ‘off-the- shelf ’ colour metrics applied without consideration of the assumptions behind them. At best, most current applications of digital photography to studies of ani- mal coloration fail to utilize the full potential of the technology; more commonly, they yield data that are qualitative at best and uninterpretable at worst. This present study aims to provide an accessible guide to addressing these problems. We assume the reader has two possible goals: (1) to reconstruct the reflectance spectrum of the object (maybe just in broad terms such as the relative amounts of long-, medium- and short- wave light; although we will also consider something more ambitious) or (2) to model the object’s colour as perceived by a nonhuman animal. Because we are con- sidering applications of the accessible and affordable technology of conventional digital colour cameras, we are primarily focused on the human-visible spectrum of c . 400–700 nm, but we also consider UV imaging and combining this information with that from a stan- dard camera. Our examples come from an investiga- tion of colour patterns on lepidopteran wings, and how these might be viewed by avian predators. This is a challenging problem (birds are potentially tetra- chomatic and have an UV-sensitive cone type; Cuthill et al ., 2000b), yet it is both tractable and informative, because much of the avian colour world overlaps with ours and birds are the focal organisms in many studies of animal coloration (whether their sexual signals, or the defensive coloration of their prey). CONCEPTUAL BACKGROUND The light coming from a point on an object, its radi- ance spectrum, is a continuous distribution of differ- ent intensities at different wavelengths. No animal eye, or camera, quantifies the entire radiance spec- trum at a given point, but instead estimates the inten- sity of light in a (very) few broad wavebands. 
Humans USING CAMERAS TO STUDY ANIMAL COLORATION 213 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 and many other primates use just three samples, cor- responding to the longwave (LW or ‘red’), mediumwave (MW or ‘green’) and shortwave (SW or ‘blue’) cone types in the retina (Fig. 1A); bees and most other insects also use three samples, but in the UV, SW, and MW wavebands; birds and some reptiles, fish and but- terflies use four samples (typically UV, SW, MW, and LW; Fig. 1B). A corollary of colour vision based on such few, broadband, spectral samples is that the colour appearance of an object can be matched, perfectly, by an appropriate mixture of narrow waveband lights (‘primary colours’) that differentially stimulate the photoreceptors. Three primary colours [e.g. red, green, and blue (RGB) in video display monitors] are required for colour matching by normally sighted humans. All that is required is that the mix of primary colours stimulates the photoreceptors in the same way as the radiance spectrum of the real object (without actually having to mimic the radiance spectrum per se ). The additive mixing of three primaries is the basis of all video and cinematographic colour reproduction, and colour specification in terms of the amounts of these primaries, the so-called tristimulus values, lies at the base of most human colour science (Wyszecki & Stiles, 1982; Mollon, 1999; Westland & Ripamonti, 2004). That said, RGB values from a camera are not standardized tristimulus values and so, although they are easily obtained with packages such as Paintshop Pro (Corel Corporation; formerly Jasc Software) or Photoshop (Adobe Systems Inc.), simply knowing the RGB values for a point in a photograph is not suffi- cient to specify the colour of the corresponding point in the real object. An over-riding principle to consider when using dig- ital cameras for scientific purposes is that most digital cameras are designed to produce images that look good, not to record reality. So, just as Kodachrome and Fujichrome produce differing colour tones in ‘ana- logue’ film-based cameras, each film type having its own advocates for preferred colour rendition, the same is true of digital cameras. The values of R, G and B that are output from a camera need not be linearly related to the light intensity in these three wave- bands. In technical and high-specification cameras they are, and the sensors themselves (the Charge Cou- pled Devices; CCDs) generally have linear outputs. By contrast, most cameras designed for non-analytical use have nonlinear responses (Cardei, Funt & Bar- nard, 1999; Lauziére, Gingras & Ferrie, 1999; Cardei & Funt, 2000; Barnard & Funt, 2002; Martinez-Verdú, Pujol & Capilla, 2002; Westland & Ripamonti, 2004). This is a function of post-CCD processing to enhance image quality, given the likely cross-section of print- ers, monitors, and televisions that will be used to view the photographs (these devices themselves having diverse, designed-in, nonlinearities; Westland & Ripa- monti, 2004). Most digital images will display well on most monitors because the two nonlinearities approx- imately cancel each other out. The first step in anal- ysing digital images is therefore to linearize the RGB values. 
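To make the idea of broadband spectral sampling concrete, the following minimal Python sketch reduces a radiance spectrum to three sensor responses. The Gaussian sensitivity curves and the example spectrum are invented placeholders (they are not the Nikon sensor or cone data of Fig. 1); the point is simply that, for camera and eye alike, only a few weighted sums of the spectrum are retained.

```python
import numpy as np

# Wavelength axis covering the human-visible range (nm).
wl = np.arange(400, 701, 1)

def gaussian_sensitivity(peak, width):
    """Hypothetical bell-shaped spectral sensitivity (not real camera or cone data)."""
    s = np.exp(-0.5 * ((wl - peak) / width) ** 2)
    return s / s.sum()  # normalize to equal area, as in Fig. 1

# Three illustrative broadband sensors (stand-ins for LW, MW and SW channels).
sensors = {
    "LW": gaussian_sensitivity(600, 40),
    "MW": gaussian_sensitivity(540, 40),
    "SW": gaussian_sensitivity(460, 30),
}

# A made-up radiance spectrum: a surface reflecting mostly long wavelengths
# under a broadband illuminant.
radiance = 1.0 + 0.8 * np.tanh((wl - 560) / 30)

# Each sensor's response is the radiance weighted by its sensitivity and summed
# over wavelength - the only information the camera (or eye) retains.
responses = {name: float(np.sum(radiance * s)) for name, s in sensors.items()}
print(responses)  # three numbers now summarize the whole spectrum
```

Any other spectrum yielding the same three weighted sums would be indistinguishable to these sensors, which is why an appropriate mixture of three primaries can match the colour of a real object without reproducing its radiance spectrum.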
Even with RGB values that relate linearly to R, G, and B light intensity, there is no single standard for what constitutes ‘red’, ‘green’, and ‘blue’ wavebands; nor need there be because different triplets of primary colours can (and, historically, have been) used in experiments to determine which ratios of primaries match a given human-perceptible colour (Mollon, 1999; Westland & Ripamonti, 2004). The spectral sen- sitivities of the sensors in a digital camera need not, and usually do not, match those of human visual pig- ments, as was the case with the Nikon 5700 Coolpix camera primarily used in this study (Fig. 1C). The RGB values in images from a given camera are specific to that camera. Indeed, the values are not necessarily even specific to a particular make and model, but rather specific to an individual camera, because of inherent variability in CCDs at the manufacturing stage (Fig. 2). One can, however, map the camera RGB values to a camera-independent, human colour space (and, under some circumstances, that of another ani- mal) given the appropriate mapping information. Therefore, the mapping, through mathematical transformation, of the camera-specific RGB values to camera-independent RGB (or other tristimulus repre- sentation) is the second crucial step in obtaining use- ful data from a digital image. Furthermore, and often as part of the transformation step, it will usually be desirable to ‘remove’ variation due to the illuminating light. The camera measures R, G, and B radiance, which is the product of the reflectance of the object and the three-dimensional radiance spectrum illuminat- ing the object (often approximated by the irradiance spectrum of the illuminant). The situation is rather more complex underwater, where the medium itself alters the radiance spectrum (Lythgoe, 1979) by wave- length-dependent attenuation. However, an object does not change colour (much) when viewed under a blue sky, grey cloud, or in forest shade, even though the radiance spectra coming from it changes consider- ably. This phenomenon of ‘colour constancy’, whereby the visual system is largely able to discount changes in the illuminant and recover an object’s reflectance spectrum, is still not fully understood (Hurlbert, 1999), but equivalent steps must be taken with digital images if it is object properties that are of interest rather than the radiance itself. Many digital cameras allow approximations of colour constancy (white-point balancing) at the point of image acquisition; for exam- ple by selecting illuminant conditions such as sky- light, cloudy, and tungsten. However, these settings are an approximation and, in practice, their effects 214 M. STEVENS ET AL . © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 Figure 1. A, normalized absorptance (equal areas under curves) of human cones. Absorbance ( N ) data from Dartnall, Bowmaker & Mollon (1983) converted to absorptance ( P ) by the equation P = 1 − 10 − 1NLS , where L is the length of the cone (20 µ m from Hendrickson and Drucker, 1992), and S is specific absorbance, 0.015/ µ m − 1 . B, normalized absorptance (equal areas under curves) of starling cones to different wavelengths of light. From Hart, Partridge & Cuthill (1998). C, normalized spectral sensitivity (equal areas under curves) of the sensors in the Nikon 5700 Coolpix camera used in the present study. SW, shortwave; MW, mediumwave; LW, longwave; UV, ultraviolet. 
need to be eliminated because the effect of the illuminant itself needs to be 'removed'. Removing the effect of the light source characteristics can thus be coupled to eliminating any biases inherent in the camera's image processing (such as an over-representation of some wavelengths/bands to modify the appearance of the photograph; Cardei et al., 1999; Finlayson & Tian, 1999; Lauziére et al., 1999; Martinez-Verdú et al., 2002). This is essential if accurate data representing the inherent spectral reflection characteristics of an animal's colour are to be obtained.

Many studies have used cameras to investigate animal colour patterns, but most fail to test their digital cameras to determine if all of the above assumptions are met and/or if the analysis yields reliable data (Frischknecht, 1993; Villafuerte & Negro, 1998; Wedekind et al., 1998; Gerald et al., 2001; Kodric-Brown & Johnson, 2002; Bortolotti, Fernie & Smits, 2003; Cooper & Hosey, 2003); for a rare exception, see Losey (2003). We approach these problems in the sequence that a scientist would have to address them if interested in applying digital photography to a research question about biological coloration.

This study focuses both on obtaining data corresponding to inherent animal coloration, such as reflection data, and on obtaining data relevant to a given receiver's visual system. Either of these data types may be more suitable depending upon the research question. Reflection data do not assume specific environmental conditions or a particular visual system viewing the object, and so data can be compared across different specimens easily, even when measured in different places. The lack of assumptions about the receiver's visual system, such as photoreceptor types, distributions, abundances, sensitivities, opponency mechanisms, and so on, means the data 'stand alone' and can be analysed as an inherent property of the animal or an object propagating the signal. This is useful if a researcher simply wishes to know if, for example, individual 'a' has more longwave reflection than individual 'b'. Removing illumination information also coincides with evidence that many animals possess colour constancy. Conversely, simply taking reflection into account could be misleading if what one really wants to know is how a signal is viewed by a receiver. For example, if an individual possesses a marking high in reflection of a specific waveband, but the environment lacks light in that part of the spectrum or the receiver is insensitive to that waveband, the region of high spectral reflection will be unimportant as a signal. Therefore, it is often necessary to include the ambient light characteristics and, if known, information concerning the receiver's visual system.
However, calculated differences in photon catches of various photoreceptor types (for example) between the different conditions do not nec- essarily lead to differences in perception of the signal, if colour constancy mechanisms exist. Furthermore, if reflection information is obtained, this may be con- verted into a visual system specific measure, either by mapping techniques, as discussed here, or by calcula- tions with illuminant spectra and cone sensitivities. Therefore, although the present study deals with both types of measurements, we focus more on the task of Figure 2. A plot of spectral sensitivity of two Nikon 5700 cameras for the longwave (LW), mediumwave (MW), and shortwave (SW) channels. Even though the cameras are the same make and model, and were purchased simultaneously, there are some (albeit relatively small) differences in spectral sensitivity. S p e c tr a l S e n s it iv it y 400 450 500 550 Wavelength (nm) LW_1 LW_2 MW_1 MW_2 SW_1 SW_2 600 650 700 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 216 M. STEVENS ET AL . © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 obtaining information about inherent properties of animal coloration. We assume that images are stored to a precision of 8 bits in each colour channel, such that intensity is on a scale of 0–255; such ‘true colour’ images (2 8 cubed, or > 16 million colours) are the current norm. Although some studies have used conventional (nondigital) cam- eras to study animal coloration, we would advise against doing so. Although conventional film can be linearized, the corrections required from one batch of film to the next are likely to differ, even from the same manufacturer. Film processing techniques, such as scanning to digitize the images, are also likely to intro- duce considerable spatial and chromatic artefacts, which need to be removed/prevented before analysis. CHOOSING A CAMERA We have mentioned the nonlinear response of many digital cameras and although we show (below) how linearization can be accomplished, nonlinearity is bet- ter avoided. Other than this, essential features to look for are (Table 1): 1. The ability to disable automatic ‘white-point balanc- ing’. This is a software feature built into most cameras to achieve a more natural colour balance under differ- ent lighting conditions. The brightest pixel in any image is set to 255 for R, G, and B (i.e. assumed to be white). Obviously, for technical applications where the object to be photographed has no white regions, this would produce data in which the RGB values are inap- propriately weighted. 2. A high resolution. The resolution of a digital image is generally limited by the sensing array, rather than the modulation transfer function of the lens. Essen- tially, the number of pixels the array contains deter- mines resolution, with higher resolution cameras able to resolve smaller colour patches allowing more detail to be measured, or the same amount of relative detail measured from a further distance from the subject. Also important is the Nyquist frequency (half that of the highest frequency spatial waveform), which is the highest spatial frequency where the camera can still accurately record image spatial detail; spatial pattern- ing above this frequency results in aliasing, which could be a problem for patterns with a very high level of spatial detail (Efford, 2000). 
There is no set rule as to what the minimum number of pixels in an image should be; if it is possible to work in close proximity to the object, then even a 0.25-megapixel image may be sufficient. The key is to avoid Nyquist sampling problems: the pixels need to be less than half the size of the smallest detail of interest in the image. Each pixel on a digital camera sensor contains a light-sensitive photodiode, measuring the intensity of light over a broadband spectrum. A colour filter array is positioned on top of the sensor to filter the red, green, and blue components of light, leaving each pixel sensitive to one waveband of light alone. Commonly, there is a mosaic of pixels, with twice as many green-sensitive ones as red or blue. The two missing colour values for each individual pixel are estimated based on the values of neighbouring pixels, via so-called demosaicing algorithms, including Bayer interpolation. It is not just the number of pixels a camera produces (its geometrical accuracy) that matters, but also the quality of each pixel. Some cameras are becoming available that have 'foveon sensors', with three photodetectors per pixel, and can thus achieve increased colour accuracy by avoiding artefacts resulting from interpolation algorithms. However, due to the power of the latest interpolation software, colour artefacts are usually minor, especially as the number of pixels increases, and foveon sensors may have relatively low light sensitivity. Higher quality sensors have a greater dynamic range, which can be passed on to the images, and some cameras are now being produced with two photodiodes per pixel: one of which is highly sensitive to low light levels, the other of which is less sensitive and is used to estimate higher light levels without becoming saturated. A distinction should also be made between the number of overall pixels and the number of effective pixels. A conventional 5-megapixel camera may actually output 2560 × 1920 pixel images (4,915,200 pixels) because some of the pixels in the camera are used for various measurements in image processing (e.g. dark current measurements).

Table 1. Desirable characteristics when purchasing a digital camera for research

Attribute – Relative importance
High resolution (e.g. minimum of 5 megapixels) – Medium (depends upon the complexity/size of the object photographed)
Manual white balance control – High
Macro lens – Medium
Ability to save TIFF/RAW file formats – High
Manual exposure control – High
Remote shutter release cable capability – Low
Ability to change metering method – Medium
Optical zoom – Medium

3. The ability to store images as uncompressed TIFF (Tagged Image File Format) or RAW files. Some mid-range cameras allow storage as RAW files; others do not, but often allow images to be saved as TIFF files. This is something to determine before purchasing a camera. Other file types, in particular JPEGs (Joint Photographic Experts Group), are unsuitable because information is lost in the compression process. JPEG compression is of the 'lossy' type, which changes the data coming from the CCD array, and the lost information cannot be recovered. This is often undetectable to the human eye, but introduces both spatial and chromatic artefacts in the underlying image data, particularly if the level of compression is high (for two simple illustrations, see Figs 3, 4; the short sketch below shows how such effects can be quantified).
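The consequences of lossy storage can be checked numerically in the spirit of the step-wedge test of Figure 4. The sketch below is a minimal illustration (assuming Python with numpy and Pillow; the file names and the 50% quality setting are arbitrary choices, not values from the original study): it writes the same synthetic grey wedge as an uncompressed TIFF and as a JPEG, then measures how far the JPEG pixel values depart from the original.

```python
import numpy as np
from PIL import Image

# Build a synthetic grey step wedge (steps of 25 from 0 to 250), as in Fig. 4.
steps = np.repeat(np.arange(0, 251, 25, dtype=np.uint8), 20)   # 11 steps, each 20 px wide
wedge = np.dstack([np.tile(steps, (50, 1))] * 3)                # 50 rows, 3 identical channels

Image.fromarray(wedge).save("wedge.tif")              # loss-less, uncompressed TIFF
Image.fromarray(wedge).save("wedge.jpg", quality=50)  # 'intermediate' lossy JPEG

tif = np.asarray(Image.open("wedge.tif")).astype(int)
jpg = np.asarray(Image.open("wedge.jpg")).astype(int)

err = np.abs(tif - jpg)
print("TIFF identical to original:", np.array_equal(tif, wedge.astype(int)))
print("JPEG max / mean absolute error:", err.max(), round(float(err.mean()), 2))
```

The TIFF round trip returns the data unchanged, whereas the JPEG values deviate, most strongly at the step boundaries, mirroring the disruption plotted in Figure 4.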
JPEGs compress both the colour and spatial information, with the spa- tial information sorted into fine and coarse detail. Fine detail is discarded first because this is what we are Figure 3. Four images of the hind left spot on the emperor moth Saturnia pavonia illustrating the effects of compression on image quality. A, an uncompressed TIFF image of the original photograph. B, a JPEG image with minimal compression (10%). C, a JPEG image with intermediate compression (50%), which still appears to maintain the original structure of the image, but careful examination of the image’s spatiochromatic content shows inconsistencies with the original TIFF file. D, a JPEG image with maximal compression (90%) showing severe spatial and chromatic disruption. A B C D 218 M. STEVENS ET AL . © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 less sensitive to. For example, Gerald et al . (2001) used digital images to investigate the scrota of adult vervet monkeys Cercopithecus aethiops sabaeus . They saved the images as JPEG files but, because the level of compression of the files is not stated, it is impossible to assess the degree of error introduced. Camera man- uals may state the level of compression used on differ- ent settings, and image software should also state the level of compression used when saving JPEG files. However, even if the level of compression is known, the introduction of artefacts will be unpredictable and so JPEG files should be avoided. Lossy compression is different from some other types of compression, such as those involved with ‘zipping’ file types, where all the compressed information can be recovered. Uncom- pressed TIFF files are loss-less, but TIFF files can be compressed in either lossy or loss-less ways, and, like JPEGs, TIFFs can be modified before being saved in other ways if the necessary camera functions are not turned off (such as white-point balancing). For most cameras, a given pixel on a CCD array has only one sensor type (R, G, or B), and interpolation is required to estimate the two unknown colour values of a given pixel. Both JPEGs and TIFF files undergo interpola- tion at the stage of image capture by the camera’s internal firmware, which cannot be turned off, and the method is usually opaque to the user. Some cameras have the capacity to store RAW images. RAW files are those that are the direct product of the CCD array, and, unlike TIFFs or JPEGs which are nearly always 8-bit, RAW files are usually 12- or 16-bit. This means they can display a wider variety of colours and are generally linear because most CCDs are linear, and undergo none of the processing potentially affecting other file types. The RAW files from the camera in our study occupy approximately half of the memory of an uncompressed TIFF file because even though the TIFF file only retains 8-bits of information, it occupies twice the storage space because it has three 8-bit colour channels, as opposed to one 12-bit RAW channel per CCD pixel. However, before being useable as an image, RAW files must also go through interpolation steps in the computer software into which the files are read. Thumbnails of unprocessed RAW files in RGB format can be read into some software, but these are rela- tively useless, being only 160 × 120 pixels in resolu- tion, compared to 2560 × 1920 pixels for the processed images. 
The conversion to another file type can pro- ceed with no modification, just as would be the case if taking photos directly as uncompressed TIFF images. One problem with RAW files is that they can differ between manufacturers and even between camera Figure 4. Grey values measured when plotting a transect across a grey scale step image with increasing values from left to right. Grey values start at 0 on the left of the series of steps and increase in steps of 25 to reach values of 250 on the right. Plotted on the graph are the values measured for images of the steps as an uncompressed TIFF file, and JPEGs with ‘minimum’ (10%), ‘intermediate’ (50%), and ‘maximum’ (90%) levels of compression. Values of 30, 60, and 90 have been added to the JPEG files with minimum, intermediate and maximum levels of compression to separate the lines vertically. Note that, as the level of compression increases, the data measured are more severely disrupted, particularly at the boundary between changes in intensity. In the case of complex patterns, the disruption to the image structure means that measurements at any point in the image will be error prone. 0 25 50 75 100 Pixels 125 150 175 0 50 100 150 200 G re y V a lu e 250 300 350 400 Step Image Maximum Intermediate Minimum Tiff USING CAMERAS TO STUDY ANIMAL COLORATION 219 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 models, and so special software and/or ‘plug-ins’ may be needed, or the software provided by the manufac- turer must be used, to convert the images to other file formats. Unfortunately, the interpolation process is rarely revealed by the manufacturer, and may intro- duce nonlinearities into the file. It is possible to write custom programmes to read RAW files into software programmes and this has the advantage that the user can then either use the RAW data directly or decide exactly what method should be used to interpolate the RAW file into a TIFF file. Once our RAW files had been processed by software supplied by the manufacturer, they had almost identical properties to the uncom- pressed TIFF files (the introduction of nonlinearities could be due to the software processing or a nonlinear CCD). Some imaging software should allow the RAW files to be processed into TIFFs without introducing nonlinearities. RAW files can also be converted into 16-bit TIFF files, which show higher accuracy than 8- bit TIFFs and may highlight extra detail. These 16-bit file types occupy approximately 30 Mb, so consider- able storage space is needed to keep a large number of these files. However, relatively more unprocessed RAW files can be stored than TIFFs on a memory card. 4. The capacity for manual exposure control or, at the very least, aperture priority exposure. The calibration curve may vary with different aperture settings and focus distances so, to avoid the need for a large num- ber of separate calibration estimates, it is more con- venient to fix the aperture at which photographs are taken and work at constrained distances. If the aper- ture value is increased, more light from the edge of the lens is allowed through, and these rays usually do not converge on the same point as those rays coming through the centre of the lens (spherical aberration). This is especially true for colours near the edges of the human visible spectrum. By keeping the aperture con- stant and as small as possible (large F-numbers), this problem is unlikely to be significant. 5. 
The ability to take a remote shutter release cable (manual or electronic) to facilitate photography at long integration times (slow shutter speeds) when light levels are low. 6. Known metering characteristics. Many cameras have multiple options for light metering, such that the exposure is set dependent upon average intensity across the entire field imaged, only the intensity at the central spot, or one or more weighted intermedi- ates. Knowing which area of the field in view deter- mines exposure facilitates image composition. 7. Optical zoom can be useful, particularly if the level of enlargement can be fixed manually, so it can be repro- duced exactly, if needed, each time the camera is turned on. Digital zoom is of no value because it is merely equivalent to postimage-capture enlargement and so does not change the data content of the area of interest. 8. Good quality optics. One problem with lenses is chromatic aberration, in which light of different wave- lengths is brought to a focus in a different focal plane, thus blurring some colours in the image. This can be caused by the camera lens not focusing different wave- lengths of light onto the same plane (longitudinal chro- matic aberration), or by the lens magnifying different wavelengths differently (lateral chromatic aberration). Párraga, Troscianko & Tolhurst (2002) tested camera lenses of the type in our Nikon camera, by taking images in different parts of the spectrum through nar- rowband spectral filters and verified that the optimal focus settings did not vary significantly, meaning that the lenses did not suffer from this defect. Narrow bandpass filters selectively filter light of specific nar- row wavebands (e.g. from 400 to 410 nm). Using a set of these filters enables images to be obtained where the only light being captured is in a specific waveband. Other lenses may not be as good, especially if they have a bigger optical zoom range. Therefore, aside from the requirement to produce images free from problems such as spherical aberration, the most important issue is to minimize chromatic aberration. As with Párraga et al . (2002), a good test for this is to take images of a page of text under white light through narrowband red and blue filters without changing the focus (this requires manual focus). If there is no chromatic aber- ration, then both images should be equally sharp. A more formal test is to measure the Fourier spectrum of the two images; if there is a good correction for chromatic aberration the two spectra should be the same. Furthermore, Hong et al . (2001) noted that, in some camera lenses, light is not uniformly transmitted across its area, with the centre of the lens transmitting more light. This would result in the pixels in the centre of the image being over-represented in terms of inten- sity. This potential problem should be tested for. Losey (2003) also found that the edges of images were slightly darker. In some situations, a good macro lens is also highly desirable because this allows close up images of complex patterns to be obtained. Without a macro lens, it may not be possible to move the camera close enough to resolve complex patterns. Some cam- eras even come with a ‘super’ macro lens, such as the Fujifilm FinePix S7000, which allows photographs to be taken up to 1 cm from the object. 9. The capacity to take memory cards of high capacity. TIFF files are very large ( c . 
15 Mb for an image 2560 by 1920 pixels), so that a 512 Mb card that can store over 200 medium-compression JPEGs will only store 34 TIFFs. IMAGE COLOUR VALUES The colour values to be calculated and used in any analysis are stored as RGB values in TIFF files auto- 220 M. STEVENS ET AL . © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 matically when a camera saves an image or when a file is converted into a TIFF image from its RAW file format and, if 8-bit, this is on a scale of 0–255. The camera or computer conversion software may have the option to save the image as either 8-bit or 16-bit, but 8-bit is currently more standard. The steps that follow to calculate values corresponding to, for exam- ple, reflection or photon catches are spelt out below. If adjusting an image with a standard or set of stan- dards to recover reflectance, then the standards should have a flat reflectance spectrum (i.e. R = G = B); therefore, the image values are adjusted so that R = G = B in the linearized picture. This will give an image in which the pixels have the correct rel- ative spectral reflectance. At this point, a crucial issue to emphasize is that many image software pro- grammes offer the option to convert values into other colour spaces, such as HSB (three images correspond- ing to hue, saturation, and brightness). Conversions such as HSB should be avoided and we strongly advise against this type of conversion. HSB is a human-vision-specific colour space, and even in terms of human vision, it is unlikely to be accurate; a more widely used and well tested colour space for humans is the Commission Internationale de l’Éclairage (CIE) Laboratory colour space, which may in some cases be appropriate. There are numerous pitfalls with using methodological techniques based on human vision to describe animal colours (Bennett et al ., 1994; Stevens & Cuthill, 2005). SOFTWARE One of the biggest advantages of using images to anal- yse coloration is the existence of a huge number of flexible and powerful software programmes, coupled with the option to write custom programmes in a variety of programming languages. Some of the pro- grammes available to deal with image processing are standard and quite affordable, such as Paintshop Pro or Photoshop, which can be used for a range of simple tasks. However, there are a range of other options available, including the popular freeware programmes such as the open-source image editor GIMP and the Java-based (Sun Microsystems, Inc.; Efford, 2000) imaging programme ‘Image J’ (Rasband, 1997–2006; Abràmoff, Magalhäes & Ram, 2004), with its huge variety of available ‘plugins’, written by various people for a range of tasks. Image J also permits custom pro- grammes written in the language Java to accompany it. For example, a plugin that we used, called ‘radial profile’, is ideal for analysing lepidopteran eyespots, and other circular features. This works by calculating the normalized intensities of concentric circles, start- ing at a central point, moving out along the radius. Figure 5 gives an example of this plug-in, as used to analyse an eyespot of the ringlet butterfly Aphantopus hyperantus . The programme MATLAB (The Mathworks Inc.) is also an extremely useful package for writing calibra- tions and designing sophisticated computational mod- els of vision. 
This is a relatively easy programming language to learn, is excellent for writing custom and powerful programmes, and, due to its matrix manipulation capabilities, is well suited to dealing with images (digital images are simply matrices of numbers). MATLAB can also be bought with a range of 'toolboxes' containing numerous ready-written functions for various tasks, including statistics, wavelet transformations and, in the Image Processing Toolbox, a wide range of imaging functions (Hanselman & Littlefield, 2001; Hunt et al., 2003; Gonzalez, Woods & Eddins, 2004; Westland & Ripamonti, 2004).

HOW FREQUENTLY SHOULD CALIBRATIONS BE UNDERTAKEN?

The frequency with which calibrations should be undertaken depends upon the specific calibration required. For example, determining the spectral sensitivity of a camera's sensors need only be performed once because this should not change with time, as long as the lens on the camera is not changed (in which case recalibration may be needed). Additionally, the calculation of the camera's response to changing light levels and the required linearization need only be performed once because this too does not change with time. However, if calculating reflection, the calibration needs to be performed for each session/light setup because the light setup changes the ratio between the LW, MW, and SW sensors.

CALIBRATING A DIGITAL CAMERA

There are several steps that should be followed when wishing to obtain values of either reflection or data corresponding to an animal's visual system. To obtain values of reflection:

1. Obtain images of a set of reflectance standards used to fit a calibration curve.
2. Determine a calibration curve for the camera's response to changes in light intensity in terms of RGB values.
3. Derive a linearization equation, if needed, to linearize the response of the camera to changes in light intensity, based on the parameters determined from step 2.
4. Determine the ratio between the camera's response in the R, G, and B channels, with respect to the reflectance standards, and equalize the response of the different colour channels to remove the effects of the illuminating light and any biases inherent in the camera's processing.

Figure 5. Results from a radial profile analysis performed upon one eyespot of the ringlet butterfly Aphantopus hyperantus, illustrating the high percentage reflectance values obtained for the centre of the spot and the 'golden' ring further from the centre, particularly in the red and green channels, and the lack of an eyespot in the ultraviolet (UV).

If data corresponding to an animal's visual system are required (such as relative photon catches):

1. Obtain photographs of reflectance standards through a set of narrow band-pass filters, at the same time as measuring the radiance with a spectrophotometer.
2. Determine the linearity of the camera's response to changing light levels and, if necessary, derive a linearization. Furthermore, using radiance data and the photographs through the band-pass filters, determine the spectral sensitivity of the camera's different sensor types.
3. Using data on the spectral sensitivity of the camera's sensors, and the sensitivity of the animal's sensors to be modelled, produce a mapping between the two colour spaces, based on the responses to many different radiance spectra.

These different steps are discussed in detail below.

LINEARIZATION

If a set of grey reflectance standards is photographed and then the measured RGB values are plotted against the nominal reflectance value, the naïve expectation would be of a linear relationship (Lauziére et al., 1999). One might also expect the values obtained for each of the three colour channels to be the same for each standard because greys fall on the achromatic locus of R = G = B (Kelber et al., 2003). However, as mentioned previously, many cameras do not fulfil such expectations, and they did not for the Nikon 5700 Coolpix camera that we used in our study (Fig. 6; see Appendix 2). A different nonlinear relationship between grey value and nominal reflection for each colour channel requires that the linearizing transformation must be estimated separately for each channel. It also means that an image of a single grey reflection standard is insufficient for camera calibration; instead, a full calibration experiment must be performed.

Figure 6. The relationship between the grey scale value measured for a set of seven Spectralon reflectance standards from raw digital TIFF file images and the nominal reflection value, showing a curved relationship for the R, G, and B data. The 'required' line illustrates values that should be measured if the camera's response was linear and the three channels equally stimulated. LW, longwave; SW, shortwave; MW, mediumwave.

We used a modification of the linearization protocols developed by Párraga (2003) and Westland & Ripamonti (2004). The first step is to photograph a range of standard greyscales of known reflectance value. Westland & Ripamonti (2004) used the greyscale of the Macbeth ColorChecker chart (Macbeth, Munsell Color Laboratory). In the present study, because we required reflection standards suitable for UV photography (see below), we used a set of Spectralon diffuse reflectance standards (Labsphere Inc.). These standards, made of a Teflon microfoam, reflect light of wavelengths between 300 nm and 800 nm (and beyond) approximately equally, and are one of the most highly Lambertian substances available over this spectral range. The standards had nominal percentage reflection values of 2%, 5%, 10%, 20%, 40%, 50%, and 75%. If the object of the study, as in Westland & Ripamonti (2004: chapter 10) and the present study, is to recover reflectance data from the images, then the nature of the illuminant, as long as it is stable over time (Angelopoulou, 2000) and of adequate intensity in all wavebands, is irrelevant. We used a 150-W Xenon arc lamp (Light Support), which was allowed to warm up and stabilize for 1 h before the calibration exercise, and then tested for stability before and after the calibration. In Párraga (2003), the goal was to recover spectral radiance; thus, at the same time as photographing the standards, the radiance of each greyscale patch was measured using a spot-imaging telespectroradiometer (TopCon Model SR1, calibrated by the National Physical Laboratory).
After that, each sensor's grey level output was plotted against a measure of the total spectral radiance that stimulated it, at various shutter speeds. Because radiance, unlike reflectance, varies with the illuminant, Párraga (2003) repeated the calibration process under a variety of lighting conditions. If recovering radiance is the objective, it is important to determine the calibration curve appropriate to the conditions under which the research photographs will be taken. Objects will give problems of metamerism if their reflectance spectra are 'spiky' or, at least, very uneven. It is therefore important to check that the linearization calibration works for objects from the same class as those being measured.

The next step is to determine the function relating the intensity values (0–255) for each of the RGB sensors to true reflection, or radiance, as appropriate, as measured spectrometrically. Many studies describe power functions of the same family as those relating intensity to voltage in cathode ray tube monitors; these are so-called gamma functions of the type: Output = constant × (input)^γ. For this reason, the linearization process is sometimes referred to as 'gamma correction'. The term gamma function means different things in film photography, digital photography, and algebraic mathematics, and so is a potentially confusing term that is best avoided. Because the response of the camera's sensors is likely to be camera specific, we recommend determining the curve that best fits the data. Although many curves will no doubt fit the data very closely (e.g. a Modified Hoerl and Weibull model, amongst others, fitted our reflection data very well), it is preferable to choose a function that is the same for each of the camera's three sensors; this makes producing the calibrations much easier because the calibration equation will be of the same form for each channel, with only the parameters varying. If there are several curves that all fit the data well, then choosing the simplest equation, with the fewest parameters, makes calibration much easier. The calibration curve should also be invertible, again favouring a simpler model, because inverting a high-order polynomial, for example, can be very complicated. We found that the function below fitted our camera well:

Q_S = a × b^P  (1)

where Q_S is the photon catch of a given sensor S (R, G, or B), P is the value of the pixel of sensor S, and a and b are constants. Q_S is the product of the measured radiance spectrum and the sensor's spectral sensitivity, but it is rare for manufacturers to publish such data. Westland & Ripamonti (2004) mention that luminance is sometimes used as an approximation, on the assumption that for a grey standard the radiance in all three channels should be the same. However, this assumes a spectrally flat light source, which no light source ever really is. Therefore, the spectral sensitivity needs to be measured directly, by measuring the camera RGB values when imaging greyscales illuminated through narrow band-pass filters. In this way, one can construct spectral sensitivity curves analogous to the spectral sensitivity curves of animal photoreceptors (Fig. 1). Párraga (2003) provides technical details on how to achieve this. In Párraga's (2003) linearization exercise, the value of b in the equation above was found to be similar for all conditions tested (sunny, cloudy, and incandescent artificial light) and all sensors. Thus, the value of a defined each curve.
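The fitting just described can be sketched in a few lines of Python. The grey values below are invented so as to follow eqn (1) exactly, purely for illustration (a real calibration would use the values measured from the Spectralon standards), and the optional dark-current offset anticipates eqns (2)–(4) below.

```python
import numpy as np

# Nominal reflectances (%) of the grey standards, as in the text (2-75%).
reflectance = np.array([2, 5, 10, 20, 40, 50, 75], dtype=float)

# Mean 8-bit grey values measured for ONE channel from the calibration images.
# These numbers are invented (generated to follow eqn 1) for illustration only.
pixel = np.array([56, 93, 121, 149, 177, 186, 203], dtype=float)

# Eqn (1): Q = a * b**P. Taking logs gives log(Q) = log(a) + P*log(b),
# a straight line in P, so a and b follow from ordinary least squares
# on the log-transformed values.
slope, intercept = np.polyfit(pixel, np.log(reflectance), 1)
a, b = np.exp(intercept), np.exp(slope)

def linearize(p, c=0.0):
    """Map a raw grey value to an estimated reflectance via the fitted curve.
    c is an optional dark-current offset (cf. eqns 2-4 below)."""
    return a * (b ** p - c)

# The fitted curve should approximately reproduce the nominal reflectances.
print(np.round(linearize(pixel), 1))
```

The same fit is repeated separately for each of the three channels, since each may follow a different nonlinear response.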
Because the linearized values for R, G, and B represented radiance in the wavebands specific to each sensor, the photograph's exposure was also taken into account in the calibration process (a longer exposure time representing lower radiance). Therefore, the following three equations were derived to linearize and scale each RGB value to radiance measures, where Q_S is the radiance measured by sensor S, b and the a_i are the coefficients estimated by ordinary least-squares regression of log-transformed values, the c_i are values to account for inherent dark current (see below) in the camera, and t is the integration time at which the photograph was taken (1/shutter speed):

Q_R = a_1(b^R − c_1)/t  (2)
Q_G = a_2(b^G − c_2)/t  (3)
Q_B = a_3(b^B − c_3)/t  (4)

If the object of the research is to obtain reflection rather than radiance measures, then t can be ignored and functions such as eqn (1) could be used, provided that t is constant and measurements of known reflection standards are also made. Because the reflection values of greyscales are, by definition, equal in all wavebands, sensor spectral sensitivity does not in principle need to be known for linearization in relation to reflection, although in practice one would want to know the spectral sensitivity curves for the camera's sensors for the data to be readily interpreted (in terms of the sensitivity corresponding to each sensor).

In the case of either radiance or reflection calibration, one should check that it is valid to force the calibration curve through the origin. All digital imaging sensors have an inherent 'dark current' (due to thermal noise in the sensors) associated with them (Efford, 2000; Stokman, Gevers & Koenderink, 2000; Barnard & Funt, 2002; Martinez-Verdú et al., 2002), so that a set of images with the lens cap on may not produce measurements of zero. As with spectrometry, the dark current can be estimated by taking images at the same exposure settings as the calibration photos, and using the pixel values as an offset for the curve. One should also check whether increasing the integration time, or temperature changes within the range at which the camera will be used for data collection, alters these background dark current values.

Figure 7 provides an example of linearization performed on the RGB values from photographs of reflectance standards (Fig. 6). This shows that, generally, the linearization was successful. However, one should note that the values of the reflectance standards with low nominal reflection values are not accurate because these standards were partially underexposed (i.e. there are many pixels with values equal or close to the dark current values) and, for this specific set of images of standards, some standards are slightly closer to, or further away from, the light source. This means that the calibration line will not be perfectly straight. Because the relatively darker areas (low pixel values) of images are often inaccurate in the measurements they yield, these values may be nonlinear (Barnard & Funt, 2002). However, the measurement error is relatively small.

RGB EQUALIZATION

If the goal is to derive reflection data from the photographs, then greys should, by definition, have equal reflection in all three colour channels.
So, if R ≠ G ≠ B in the calibration images, the next step is to equalize the three channels with respect to the images of the reflection standards, and then scale the values between 0 and 255. This, in theory, should be relatively simple: it is a matter of producing a ratio between the three channels and then scaling them, usually with respect to the green channel as a reference point, before multiplying the entire image by 2.55 to set the values on a scale between 0 and 255. So, for our data:

R′ = (R × x_R) × 2.55  (5)
G′ = (G × x_G) × 2.55  (6)
B′ = (B × x_B) × 2.55  (7)

where x_i is the scaling value for each channel, and R, G, and B are the linearized image values for each channel, respectively. The equalized values were then tested for accuracy using a different set of calibration images. Figure 8 shows the result: the three channels closely match the required calibration line. Note that there is no need for 255 to represent 100% reflection; indeed, to obtain maximum resolution in colour discrimination within and between images, if all images to be analysed are relatively dark then it would be advisable for the maximum pixel value within the dataset to be 255.

Figure 7. The relationship between measured greyscale value and nominal reflection value for the seven reflectance standards, showing the linearization of the gamma curves. LW, longwave; SW, shortwave; MW, mediumwave.

An important issue is that of saturation. With regard to the above calibration results (Figs 6, 7, 8), we maintained an integration time of 1/30 s and a lens aperture of f/8.0. This resulted in images that were slightly under-exposed and guarded against the serious problem of saturation. Saturation (also known as 'clipping'; Lauziére et al., 1999) occurs when the light levels arriving at the sensors reach an upper limit, above which any additional photons are not registered. This can be a serious problem because it prevents measurement of the true value that the pixels would have reached had saturation not occurred; a problem recognized in some studies (Hong et al., 2001). The effects of saturation are easy to find, with saturated pixels in the original image yielding values of approximately 255, with little or no standard deviation. For example, images taken under similar circumstances, but with an integration time of 1/15 s, produce results in which, at nominal reflection values of 75%, the red channel ceases to rise in pixel value. This is due to the effects of saturated pixels in the original image in the red channel, which causes the calibration to fail, since the linearization becomes ineffective and the equalization procedure results in the red channel grey values dropping away at higher reflection values (Fig. 9). These problems can be avoided by changing the exposure/integration time (t), or altering the intensity of the light source, because these determine the flux of light reaching the camera's sensors (Hong et al., 2001). However, if the exposure is to be changed between images, it is important to test that the response of the camera is the same at all exposure settings; otherwise a separate calibration will need to be performed for every change in exposure. Therefore, where possible, it is recommended that the aperture value, at least, is kept constant (Hong et al., 2001).
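A minimal Python sketch of the equalization step (eqns 5–7), together with a simple saturation check, is given below. The function names are our own, and the code assumes the linearized values are already on a 0–100 reflectance scale, so that multiplying by 2.55 places them on a 0–255 range; `standard_region` would be a pair of slices selecting the grey standard within the frame (e.g. `(slice(10, 40), slice(200, 230))`).

```python
import numpy as np

def equalize(linear_img, standard_region):
    """Equalize linearized R, G, B channels (eqns 5-7) using a grey-standard patch.

    linear_img      : float array (H, W, 3) of linearized channel values (0-100 scale)
    standard_region : tuple of slices selecting the grey standard in the image
    """
    # Mean linearized response of each channel over the standard; a grey standard
    # should give R = G = B, so scale each channel relative to the green channel.
    means = linear_img[standard_region].reshape(-1, 3).mean(axis=0)
    x = means[1] / means             # x_R, x_G (= 1), x_B
    return linear_img * x * 2.55     # place the equalized values on a 0-255 range

def saturated_fraction(raw_img, limit=255):
    """Fraction of raw (pre-linearization) pixels at the sensor ceiling.
    Saturated regions make both linearization and equalization unreliable (Fig. 9)."""
    return float((raw_img >= limit).mean())
```

Checking `saturated_fraction` on the raw images before calibration is a quick way to catch the clipping problem described above, before it silently distorts the equalized values.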
An important issue is that of saturation. With regard to the above calibration results (Figs 6, 7, 8), we maintained an integration time of 1/30 s and a lens aperture of f/8.0. This resulted in images that were slightly under-exposed and guarded against the serious problem of saturation. Saturation (also known as 'clipping'; Lauziére et al., 1999) occurs when the light level arriving at a sensor reaches an upper limit, above which additional photons are not registered. This can be a serious problem because it prevents measurement of the true value that the pixels would have reached had saturation not occurred, a problem recognized in some studies (Hong et al., 2001). The effects of saturation are easy to detect, with saturated pixels in the original image yielding values of approximately 255, with little or no standard deviation. For example, images taken under similar circumstances, but with an integration time of 1/15 s, produce results in which, at a nominal reflection value of 75%, the red channel ceases to rise in pixel value. This is due to the effects of saturated pixels in the red channel of the original image, which cause the calibration to fail: the linearization becomes ineffective and the equalization procedure results in the red channel grey values dropping away at higher reflection values (Fig. 9). These problems can be avoided by changing the exposure/integration time (t), or by altering the intensity of the light source, because these determine the flux of light reaching the camera's sensors (Hong et al., 2001). However, if the exposure is to be changed between images, it is important to test that the response of the camera is the same at all exposure settings; otherwise, a separate calibration will need to be performed for every change in exposure. Therefore, where possible, it is recommended that the aperture value, at least, is kept constant (Hong et al., 2001).

It is often the case that the red channel of a digital camera is the first to saturate (as was the case with our camera, even when using a light source biased towards shorter wavelengths of light; Fig. 9), possibly because the sensors in some cameras are biased to appeal to human perceptions, with increasing red channel values giving the perception of warmth. This may be particularly deleterious for studies investigating the content of red signals (Frischknecht, 1993; Wedekind et al., 1998), which are widespread because of the abundance of carotenoid-based signals in many taxa (Grether, 2000; Pryke, Lawes & Andersson, 2001; Bourne, Breden & Allen, 2003; Blount, 2004; McGraw & Nogare, 2004; McGraw, Hill & Parker, 2005) and theories linking carotenoid signals to immune function (Koutsos et al., 2003; McGraw & Ardia, 2003; Navara & Hill, 2003; Grether et al., 2004; McGraw & Ardia, 2005). Some cameras are also biased in their representation of relatively short wavelengths, to compensate for a lack of these wavelengths in indoor lights (Lauziére et al., 1999).

Figure 8. The greyscale values measured for the set of reflectance standards following the process of RGB channel equalization and scaling, showing a close fit to the required values. LW, longwave; SW, shortwave; MW, mediumwave.

Figure 9. The greyscale values measured for the set of reflectance standards following the process of linearization (A) and then RGB channel equalization (B) and scaling, showing that the linearization does not produce a linear response when there are saturated pixels in the image, as is the case in the R channel in this example. Saturated pixels also result in a poor equalization result, indicated by a dropping off of the R channel at higher values.

SELECTING/CONTROLLING LIGHT CONDITIONS

To some extent, the importance of selecting standardized lighting conditions and distances depends upon the calibration required. Lighting conditions should be as stable, standardized, and consistent as possible for each photograph if measurements of reflection are desired, especially if photographs of standards are taken only at the beginning and end of sessions. However, when photographing natural scenes and using measures of photon catch, for example, lighting conditions are likely to vary considerably. This may in fact be an important part of the study: to include information about the ambient light. It is generally best to avoid flashguns, because their output is difficult to measure and may be variable; however, a high-end flash with good light diffusers may be adequate. If using a flash, placing one or more grey standards of known reflectance into the part of the scene of interest should allow good recovery of reflectance, even if the illumination conditions vary in an uncontrolled manner, although these standards may need to be included in every image rather than just at the start and end of sessions. Therefore, using a flash may be acceptable if one is interested only in reflectance, but it should be avoided if one is interested in the behaviour of the natural illumination (e.g. shadows).
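As a minimal sketch of the grey-standard approach just described: if a standard of known reflectance appears in the scene, the linearized pixel values of any object can be converted to reflectance by a simple ratio, because the unknown illumination cancels out. The variable names below, and the use of an 18% card, are illustrative assumptions rather than details from the present study.

% Recovering reflectance from an in-scene grey standard, assuming the camera
% response has already been linearized. 'img' is a linearized channel,
% 'stdRegion' a logical mask over the grey standard in the same image, and
% 'stdReflectance' its known reflectance (e.g. 0.18 for an 18% grey card).
stdValue = mean(img(stdRegion));                  % mean linearized value of the standard
reflectance = img .* (stdReflectance / stdValue); % per-pixel reflectance estimate (0-1)

Because the object and the standard are lit by the same illuminant, the ratio is insensitive to flash variability or changing ambient light, which is why a standard may need to appear in every image when conditions vary.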
MAPPING TO CAMERA-INDEPENDENT MEASURES

Having used the coefficients obtained in the linearization procedure to linearize the RGB values of the images, the next step is to transform them to camera-independent values. This is because the R, G, and B data, whether in radiance or reflectance units, are specific to the wavebands designated by the camera sensors' spectral sensitivity curves (Fig. 1C). This may be sufficient for some research purposes; for example, if the sensitivities of the camera's sensors broadly correspond to the bandwidths of interest. However, it will often be desirable, either because a specific visual system is being modelled (e.g. human, bird), or simply to facilitate comparison of results across studies, to transform the camera-specific RGB values to camera-independent measures. In human studies, these are frequently one of the sets of three-coordinate representations devised by the CIE for colour specification and/or matching. Different three-variable representations have been devised to approximate colour-matching for images illuminating only the M-L-cone-rich central fovea, or wider areas of the retina; for presentation of images on video display units or printed paper; or representations that incorporate the colour balance arising from a specific illuminant, or that are illumination independent (Wyszecki & Stiles, 1982; Mollon, 1999; Westland & Ripamonti, 2004). The advantage is that all of these metrics are precisely defined, the formulae are downloadable from the CIE website, and the values in one coordinate system can be transformed to another. Westland & Ripamonti (2004) provide formulae and downloadable MATLAB (The Mathworks Inc.) code for such transformations.

Another possible camera-independent transformation is to map the linearized RGB values to the spectral sensitivities of the photoreceptors of either humans (Párraga et al., 2002) or nonhuman species. In the case of RGB radiance measures, this corresponds to calculating the photon catches of an animal's photoreceptors, rather than the camera's sensors, when viewing a particular scene. In the case of RGB reflectance measures, this can be thought of as a mapping to a species-specific estimate of reflectance in the wavebands to which the animal's photoreceptors are sensitive. Both types of mapping are particularly relevant to studies involving nonhuman animals, for which accurate psychophysical estimates of colour-matching, of the sort used to calculate human-perceived colour from camera data, are not usually available. For such a mapping to be viable, it is not necessary that the species' cone spectral sensitivities match those of the camera's sensors particularly closely (e.g. this is not true for humans; compare Fig. 1A, C). However, for the transformation to produce reliable data, the species' overall spectral range has to fall within that of the camera, and the species should have three or fewer photoreceptor classes.
For example, one can map RGB data to the lower-dimensional colour space of a dichromatic dog (with one short- and one medium/long-wave-sensitive cone type; Jacobs, 1993), but a camera with sensitivities such as those shown in Fig. 1C can never capture the full colour world of a trichromatic bee (with UV, short-, and medium-wave photoreceptors; Chittka, 1992). Mapping RGB data to a bird's colour space would appear to be invalid on two counts: birds have a broader spectral range than a conventional camera (often extending into the UV-A) and are potentially tetrachromatic (Cuthill et al., 2000b). However, if the scenes or objects of interest lack UV information, then a mapping from RGB to avian short-, medium-, and long-wave cone sensitivities can be achieved. We present the method here; it can be used for any analogous trichromatic system (e.g. human) or, with simple modification, for a lower-dimensional system of the type that is typical of most mammals (Jacobs, 1993). Subsequently, we consider how UV information from a separate imaging system can be combined with the RGB data to provide a complete representation of bird-perceived colour.

The goal is to predict the quantal catches, Q_i, of a set of i photoreceptor classes (where i ≤ 3), given a triplet of camera-sensor-estimated radiance values, Q_R, Q_G, and Q_B, derived from the calibration and linearization process described above. This amounts to solving a set of simultaneous regression equations, which are likely to be nonlinear. Mappings can be performed for more than three photoreceptor classes, provided that the spectral sensitivities of all types are covered by the spectral range of one or more of the camera's sensors. For example, a mapping could be produced to calculate images corresponding to the longwave, mediumwave, and shortwave cones of a bird's visual system, plus a luminance image based on avian double cone sensitivity. Once mapped images have been obtained, further calculations also allow the production of images corresponding to various opponency channels. Westland & Ripamonti (2004) summarize their own, and other, research on the family of equations most likely to provide a good fit to such data, and conclude that linear models (with interaction terms) of the following type perform well. For ease of interpretation, we use the notation R, G, and B to describe the camera pixel values, rather than their calibrated and linearized equivalents Q_R, Q_G, and Q_B:

Q_i = b_i1 R + b_i2 G + b_i3 B + b_i4 RG + b_i5 RB + b_i6 GB + b_i7 RGB (8)

where the b_ij are coefficients specific to receptor i, and the curve is forced through the origin (when the calibrated camera sensor value is zero, the animal's quantal catch is zero). In some cases, depending on the camera and the nature of the visual system to which mapping is required, polynomials (i.e. including terms in R^2, G^2, and B^2, or higher orders) may provide a significantly better fit (and did in our case); this should be investigated empirically. Cheung et al. (2004) note that even mapping functions of unconstrained form, obtained using neural networks applied to large datasets, do not significantly outperform polynomials.
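To make the fitting and application of eqn 8 concrete, the following MATLAB sketch builds the design matrix of linear and interaction terms and solves for the coefficients by least squares (the matrix-algebra route referred to below). All variable names are illustrative assumptions: R, G, and B are column vectors of linearized camera responses to a set of training spectra, and Qcone holds the corresponding quantal catches of the target receptor classes, obtained as described in the next paragraph. Polynomial terms (e.g. R.^2) can be appended to the design matrix in exactly the same way.

% Fit the mapping of eqn 8 (no intercept: the curve is forced through the origin).
M = [R, G, B, R.*G, R.*B, G.*B, R.*G.*B];   % training design matrix
bcoef = M \ Qcone;                          % 7 x (number of receptor classes) coefficients

% Apply the fitted mapping to a whole image; imR, imG, imB are linearized channels.
r = imR(:); g = imG(:); b = imB(:);
Qmap = [r, g, b, r.*g, r.*b, g.*b, r.*g.*b] * bcoef;
Qimg = reshape(Qmap, [size(imR), size(Qcone, 2)]);  % one 'cone image' per receptor class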
The data required to estimate the coefficients for the i photore- ceptors can either be radiances directly measured using an imaging spectroradiometer (Párraga, 2003) or, more conveniently, radiances approximated as the product of reflectance spectra and the irradiance spec- trum of the illuminant. Using eqn. 8, applied to a trichromat, 3 × 7 coefficients need to be estimated, so the number of radiance spectra must be considerably greater than this (> 100 in our experience as a mini- mum, but closer to a 1000 is better). Large numbers of radiance spectra can be obtained from internet data- bases (Parkkinen, Jaaskelainen and Kuittinen, 1988; Sumner & Mollon, 2000). The coefficients for each pho- toreceptor are then found by multiple regression (or, conveniently, if using MATLAB, by matrix algebra; Westland & Ripamonti, 2004). Although, in principle, one could derive a mapping function (i.e. set of coeffi- cients) for all possible natural spectra, viewed under all possible illuminants, greater precision can be achieved by determining a situation-specific mapping function for the research question at hand. For example, if the goal is to use a camera to quantify the coloration of orange to red objects under blue skies, then a very pre- cise mapping function could be estimated by using radi- ance data calculated only from the reflectance spectra of orange to red objects viewed under blue sky irradi- ance. If one is to derive the mapping functions by cal- culation (i.e. calculate quantal catch for camera and desired cone sensitivities, using reflectance and irra- diance data), then the sensitivity of the camera’s sen- sors is required. However, one could also derive the mapping empirically without ever measuring camera sensor sensitivities, by measuring the response of the camera’s three channels to different (known) radiance spectra, and by determining the response of the cones of the required animal’s visual system. To achieve accu- rate mapping, the camera’s response would have to be measured for many hundreds of radiance spectra and this would be time-consuming, involving many stimuli. UV IMAGING In our own research, we wished to quantify lepi- dopteran wing patterns, with respect to avian vision, so we also needed to measure the amount of reflection in the avian-visible UV waveband. At the same time as RGB photography, images of the reflectance standards and the lepidopterans were taken with a UV sensitive video camera (see Appendix 2). First, we tested whether the camera was linear with respect to both changes in the integration time, and with respect to increases in the reflection value; being a high-specification technical camera, this was indeed the case. This meant that the only calibrations needed were to scale the images to between 0 and 255; which is not initially as easy as it sounds because the cali- brations have to account for different gain and the integration times. Figure 10 provides an example of the results for the UV calibration process. In most sit- uations, it will be simpler to maintain the same gain values because this reduces the number of factors to consider in the calibration process. If images are obtained from more than one camera, there is an additional consideration that must be addressed; that of ‘image registration’. Images derived from one RGB camera will all be the same angle and distance from the specimens, and so the objects pho- tographed will be on an identical scale in each of the three channels, based on the interpolations imple- mented. 
This may not be the case when obtaining images from a second camera, as in our study, where the specimens were a different size in the photographs from each camera and would not necessarily be easy to align with the RGB images. Furthermore, one camera may produce images with a lower resolution, and with less high-frequency information; different cameras will have different Nyquist frequencies, meaning that, although aligning lower spatial frequency patterns may be relatively easy, information may be lost or poorly aligned at higher frequencies. One potential approach is to use Fourier filtering to remove the highest spatial frequency information from those images that contain it, down to the highest frequencies contained in the images from the other camera. However, this may be undesirable if the high spatial frequency information is important, as it frequently will be with complex patterns, or where edge information between pattern components is critical. The task of aligning images is made easier if: (1) the different cameras are set up as closely as possible, in particular with respect to the angle of photography, because this is the hardest factor to correct; and (2) rulers are included in at least a sample of the images, so that they can be rescaled to ensure that specimens occupy the same scale in different images. Including rulers in images also allows true distance measurements to be obtained and spatial investigations to be undertaken. If images from one camera are larger than those from another, then it is the larger images that should be scaled down in size, because this avoids the artefactual data generated by interpolation if images are rescaled upwards. Once the objects in the photographs are of the same size, it may be a relatively trivial task to take measurements from the different images that directly correspond. However, if the images are still difficult to align, then an automated computational approach can be used. A variety of these are available, and users should carefully consult the available manuals/information for the corresponding software to be sure of how the registration is completed, and to check what changes may occur to the image properties. In many cases, however, changes to the image structure will probably be small, especially at lower spatial frequencies, and have little influence on the results. One such plug-in, for the freeware software Image J (Rasband, 1997-2006; Abràmoff et al., 2004), is 'TurboReg' (available via a link from the Image J website) (Thévenaz, Ruttimann & Unser, 1998), which comes with a variety of options to align sets of images.

HOW BEST TO USE COLOUR STANDARDS

A crucial step in calibrating a digital camera is to include colour standards in some or all of the photographs taken. Including a set of colour standards in each photograph allows calibrations to be derived for each individual photograph, which would be highly accurate. However, in most cases, this is impractical and unnecessary. For example, when the light source used is consistent, a set of reflectance standards used to fit a calibration curve need only be included in photographs at the start and end of a session. Including these in each photograph may leave little space for the objects of interest.
By contrast, in many cases, such as when photographing natural scenes where the illuminating light may change and when wishing to calculate values such as photon catches, it may be important to include at least one grey standard in the corner of each photograph. Possibly the best objects to include in a photograph are Spectralon reflectance standards (Labsphere Inc.), which reflect a known amount of light equally at all wavelengths in the UV and human-visible spectrum. However, these are expensive and easily damaged; if a single standard is sufficient, a Kodak grey card (Eastman Kodak Company), which has an 18% reflectance and is relatively inexpensive, can be included.

SPATIAL MEASUREMENTS

Often, we do not wish to measure solely the 'colour' of a patch, but also the area or shape of a region of interest. In principle, this sounds easy, but it has several complications. For example, the colour boundary of an area visible to humans may not be exactly the same as that for another animal. Additionally, there may be colours that we cannot see (such as UV) that have different boundaries from those visible to a human (although most colour patches generally have the same boundary in different colour bands, such as UV, SW, MW, and LW). Another problem concerns the acuity of the animal in question: regions of interest with complex boundaries may only be discernible to animals with sufficiently high spatial acuity. Furthermore, there is a specific problem with gradual boundaries, particularly in defining where the actual edge of the colour region is.

Figure 10. The effect of scaling the ultraviolet (UV) images obtained with the PCO Variocam camera and Nikon UV transmitting lens, showing a close fit to the required values.

There are several ways to address these issues, and one must remember that the image processing steps that facilitate patch size or shape measurement may interfere with the accurate measurement of patch colour per se (e.g. by enhancing contrast between patches). One method of determining the boundary of a colour patch is to produce an automated procedure to define a specific area of interest. This can be done by thresholding an 8-bit or colour image to a binary (black and white) image, where each individual pixel has a value of either one (white) or zero (black) (Fig. 11). This can be performed by writing a custom programme in which the threshold level is defined specifically by the user, preferably based on an explicit assumption or on data. Otherwise, most imaging software has automatic thresholding algorithms, although it is not always known what thresholding value will be used.

A different method that can be used to define an area of interest is edge detection. Here, an algorithm is used to determine edges in an image, corresponding to sharp changes in intensity (either luminance or individual colour channels). These edges may, for example, be found at the boundary of a colour patch (Fig. 11). A useful feature of edge detection algorithms is that they can either be optimized without being linked to any specific visual system, or be chosen to correspond to the way in which real visual systems work (Marr & Hildreth, 1980; Bruce, Green & Georgeson, 2003; Stevens & Cuthill, 2006).
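A minimal MATLAB sketch of these two approaches is given below, assuming the Image Processing Toolbox is available for the edge detector. The image name, the threshold of 128, and the choice of a Laplacian-of-Gaussian detector are illustrative assumptions rather than values or methods prescribed by the present study; as Figure 11 shows, the threshold chosen can strongly affect which patches are recovered, so it should ideally be justified explicitly.

% 'img' is an 8-bit grey-level image.
bw = img > 128;                 % binary image: patch pixels = 1 (white), background = 0
patchArea = sum(bw(:));         % patch area in pixels

% Edge detection (Laplacian-of-Gaussian) picks out sharp intensity changes,
% which will often coincide with colour patch boundaries.
edges = edge(img, 'log');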
Once the boundary of a colour patch has been defined, it is simple to measure the area of the patch. Measuring the shape of an object is more difficult, although imaging software often comes with algo- rithms to measure attributes such as the relative cir- cularity of an area and, occasionally, more advanced shape analysis algorithms. DRAWBACKS TO USING DIGITAL IMAGES The most notable drawback is that the information obtained is not wavelength specific (i.e. it is known what wavelengths contribute to each channel, but not the contribution of any specific wavelength to the RGB value of any one pixel). This drawback can be over- come by so-called multispectral imaging (or, if the number of wavebands is high, ‘hyperspectral imag- ing’). This can involve rotating a set of filters in front of the lens, allowing the acquisition of successive images of different wavebands (Brelstaff et al., 1995; Lauziére et al., 1999; Angelopoulou, 2000; Stokman et al., 2000; Losey, 2003). This method may be partic- ularly useful if detailed wavelength information is required, or if the visual system of the receiver that the signal is aimed at is poorly matched by the sensi- tivity of an RGB camera. We do not cover this tech- nique here because, although it combines many of the advantages of spectrometry with photography, the technology is not practical for most behavioural and evolutionary biologists. Hyperspectral cameras are often slow because they may have to take upwards of 20 images through the specified spectral range. The equipment, and controlling software, must be con- structed de novo and conventional photography’s advantage of rapid, one-shot, image acquisition is lost. The specimens must be stationary during the proce- dure because movement can cause problems with image registration. Also, as Losey (2003) acknowl- edges, images obtained sequentially in the field may be subject to short-term variations in environmental conditions, and thus introduce considerable noise. Averaging the values obtained from multiple frames of the same waveband may help to eliminate some of this effect (Losey, 2003). PROBLEMS WITH USING THE AUTOMATIC CAMERA SETTINGS Many studies of animal coloration utilizing cameras apparently use the camera with its automatic set- tings. There are numerous problems that can arise when using the ‘auto’ mode. The main problem is that the settings used by the camera are adjusted accord- ing to the scene being photographed and so may be inconsistent. In general, camera manufacturers are interested solely in selling cameras and therefore want to produce pictures that look aesthetically ‘good’ by enhancing some of the images’ colours and contrasts and, thus, automatic modes are generally compatible with this objective. Given that an automat- ically set white balance changes between photos, it gives rise to different ratios between the LW, MW, and SW sensor responses. This need not always be an irre- trievable flaw but would almost certainly need some highly complex calibration procedures to recover con- sistent data, such as calibrating every single combina- tion of white balance, contrast enhancement and aperture setting modes. Any low- to mid-range camera is likely to have some white balancing present, and most mid-range cameras will give the option to man- ually set the white balance. If the camera does not allow this option and there is no indication of this in the manual, then changing the white-balance settings may not be possible. 
An additional problem with automatic settings is that the calibration curves/settings could also change at different aperture settings; this may not always be the case but, when using the automatic mode, there is the additional complication that the aperture and the exposure (integration) time may change simultaneously and substantially, leading to unnecessarily complicated calibrations if values of reflection, for example, are required. The aperture selected by the camera will also affect the quality of the image, particularly the depth of field. Another potentially serious problem with using the auto mode is that the photograph will not optimize the dynamic range of the scene photographed, meaning that some parts of the scene may be underexposed or, far more seriously, saturated.

Figure 11. Different images of a clouded yellow butterfly Colias croceus, modified to show regions of interest, such as the wing spots, identified by various techniques. A, the original 8-bit grey-level image (pixel values between 0 and 255). B, the image after an edge detection algorithm has been applied, clearly identifying a boundary around the two forewing spots, but not the hindwing spots. C, the original image after being thresholded to a binary (black/white) image with a threshold of 64. This clearly shows the forewing spots but does not produce spots where the hindwing spots were in the original image. D, the original image when converted to a binary image with a threshold of 128, now picking out both the forewing and hindwing spots (although with some 'noise' around the hindwing spots). E, the original image converted to a binary image with a threshold of 192, now not showing any clear wing spots. F, the original image when first converted to a pseudocolour image, where each pixel value falling within a given range is given a specific colour. The image is then reconverted to a grey-level image and now shows the hindwing spots with marginally sharper edges than in the original image.

CONCLUSIONS

One of the earliest studies to outline how digital image analysis can be used to study colour patterns is that of Windig (1991), an investigation of lepidopteran wing patterns. Windig (1991) used a video camera connected to a frame grabber to digitize the images for computer analysis, a method similar to that which we used to capture the UV-sensitive images. Windig (1991) stated that the method was expensive and the programmes highly complex but, today, flexible, user-friendly software is available, with various freeware programmes downloadable from the internet, and a digital camera and software can be purchased for a fraction of the cost of the setup used by Windig (1991).

Windig (1991) argued that any image analysis procedure should meet three criteria. First, completeness: a trait should be quantified with respect to all characters, such as 'colour' and area. Our procedure meets this criterion because reflection, plus spatial measurements, are attainable. Second, the procedure needs to be repeatable. This was also the case with our approach, because the calibrations for a set of images of reflectance standards were still highly accurate for other images taken under the same conditions, but at different times.
Finally, the process should be fast rel- ative to other available methods, as was our study, with potentially hundreds of images taken in a day, quickly calibrated with a custom MATLAB pro- gramme and then analysed with the range of tools available in Image J. Another advantage of capturing images with a dig- ital camera is that there are potentially a host of other noncolour analyses. Detailed and complex measure- ments of traits can be undertaken rapidly, with mea- surements and calculations that would normally be painstakingly undertaken by hand performed almost instantaneously in imaging software, including mea- surements of distances, areas, and analysis of shapes, plus complex investigations such as Fourier analysis (Windig, 1991). This may be particularly useful if han- dling the specimens to take physical measurements is not possible. The use of digital technology in studying animal col- oration is a potentially highly powerful method, avoid- ing some of the drawbacks of other techniques. In future years, advances in technology, software, and our understanding of how digital cameras work will add further advantages. It is already possible to extract data of a scene from behind a plane of glass (Levin & Weiss, 2004), which could become useful for studies of aquatic organisms (although most glass filters out UV wavelengths; Lauziére et al., 1999). Techniques are also being developed to remove the shadows from images; shadows can make edge recog- nition more difficult (Finlayson, Hordley & Drew, 2002), and hinder tasks such as image registration. With the explosion in the market of digital photogra- phy products, and the relatively low cost to purchase such items, there is the temptation to launch into using such techniques to study animal signals, with- out prior investigation into the technicalities of using such methods. This could result in misleading results. Therefore, although digital photography has the potential to transform studies of coloration, caution should be implemented and suitable calibrations developed before such investigations are undertaken. KEY POINTS/SUMMARY Below is a list of some of the main points to consider if using cameras to study animal coloration. 1. Images used in an analysis of colour should be either RAW or TIFF files and not JPEGs. 2. Grey reflectance standards should be included in images at the start of a photography session if the light source is constant, or in each image if the ambient light changes. 3. It is crucial not to allow images to become satu- rated or underexposed because this prevents accu- rate data being obtained. 4. Many cameras have a nonlinear response to changes in light intensity, which needs linearizing before usable data can be obtained. 5. To produce measurements of reflectance, the response of the R, G, and B colour channels needs to be equalized with respect to grey reflectance standards. 6. Measurements of cone photon catches correspond- ing to a specific visual system can be estimated by mapping techniques based upon sets of radiance spectra and camera/animal spectral sensitivity. 7. Digital images can be incorporated into powerful models of animal vision. 8. Do not convert image data to formats such as HSB, which are human-specific and inaccurate. Instead, use reflection data, calculations of photoreceptor photon catches or, if working on human-perceived colour, well-tested colour spaces such as CIE. 9. 
If using more than one camera, image registration may be a problem, especially if the different cameras have different resolutions. This problem can be minimized by setting up the different cameras as close to one another as possible and by ensuring that one camera does not capture significantly higher levels of spatial detail than the other. 10. Digital imaging is also a potentially highly accurate and powerful technology for studying spatial patterns.

ACKNOWLEDGEMENTS

We are very grateful to D. J. Tolhurst and G. P. Lovell for help with much of the project. C.A.P. was supported by BBSRC grants S11501 to D. J. Tolhurst and T.S.T., and S18903 to I.C.C., T.S.T., and J.C.P. M.S. was supported by a BBSRC studentship.

REFERENCES

Abràmoff MD, Magalhäes PJ, Ram SJ. 2004. Image processing with Image J. Biophotonics International 7: 36–43. Angelopoulou E. 2000. Objective colour from multispectral imaging. International Conference on Computer Vision 1: 359–374. Barnard K, Funt B. 2002. Camera characterisation for color research. Color Research and Application 27: 152–163. Bennett ATD, Cuthill IC, Norris KJ. 1994. Sexual selection and the mismeasure of color. American Naturalist 144: 848–860. Blount JD. 2004. Carotenoids and life-history evolution in animals. Archives of Biochemistry and Biophysics 430: 10–15. Bortolotti GR, Fernie KJ, Smits JE. 2003. Carotenoid concentration and coloration of American Kestrels (Falco sparverius) disrupted by experimental exposure to PCBs. Functional Ecology 17: 651–657. Bourne GR, Breden F, Allen TC. 2003. Females prefer carotenoid colored males as mates in the pentamorphic live bearing fish, Poecilia parae. Naturwissenschaften 90: 402–405. Brelstaff GJ, Párraga CA, Troscianko T, Carr D. 1995. Hyperspectral camera system: acquisition and analysis. In: Lurie JB, Pearson J, Zilioli E, eds. SPIE − human vision, visual processing and digital displays geographic information systems, photogrammetry, and geological/geophysical remote sensing. Paris: Proceedings of the SPIE, 150–159. Bruce V, Green PR, Georgeson MA. 2003. Visual perception, 4th edn. Hove: Psychology Press. Cardei VC, Funt B. 2000. Color correcting uncalibrated digital images. Journal of Imaging Science and Technology 44: 288–294. Cardei VC, Funt B, Barnard K. 1999. White point estimation for uncalibrated images. Proceedings of the IS and T/SID seventh color imaging conference: color science systems and applications, 97–100. Cheung V, Westland S, Connah D, Ripamonti C. 2004. A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms. Coloration Technology 120: 19–25. Chittka L. 1992. The colour hexagon: a chromaticity diagram based on photoreceptor excitations as a generalised representation of colour opponency. Journal of Comparative Physiology A 170: 533–543. Cooper VJ, Hosey GR. 2003. Sexual dichromatism and female preference in Eulemur fulvus subspecies. International Journal of Primatology 24: 1177–1188. Cott HB. 1940. Adaptive colouration in animals. London: Methuen Ltd. Cuthill IC, Bennett ATD, Partridge JC, Maier EJ. 1999. Plumage reflectance and the objective assessment of avian sexual dichromatism. American Naturalist 153: 183–200. Cuthill IC, Hart NS, Partridge JC, Bennett ATD, Hunt S, Church SC. 2000a. Avian colour vision and avian video playback experiments.
Acta Ethologica 3: 29–37. Cuthill IC, Partridge JC, Bennett ATD, Church SC, Hart NS, Hunt S. 2000b. Ultraviolet vision in birds. Advances in the Study of Behaviour 29: 159–214. D’Eath RB. 1998. Can video images imitate real stimuli in animal behaviour experiments? Biological Reviews 73: 267– 292. Dartnall HJA, Bowmaker JK, Mollon JD. 1983. Human visual pigments: microspectrophotometric results from the eyes of seven persons. Proceedings of the Royal Society of London Series B, Biological Sciences 220: 115–130. Efford N. 2000. Digital image processing: a practical introduc- tion using JAVA. Harlow: Pearson Education Ltd. Endler JA. 1984. Progressive background matching in moths, and a quantitative measure of crypsis. Biological Journal of the Linnean Society 22: 187–231. Endler JA. 1990. On the measurement and classification of colour in studies of animal colour patterns. Biological Jour- nal of the Linnean Society 41: 315–352. Endler JA, Mielke PW Jr. 2005. Comparing color patterns as birds see them. Biological Journal of the Linnean Society 86: 405–431. Finlayson GD, Hordley SD, Drew MS. 2002. Removing shadows from images. European Conference on Computer Vision 4: 823–836. Finlayson GD, Tian GY. 1999. Color normalisation for color object recognition. International Journal of Pattern Recogni- tion and Artificial Intelligence 13: 1271–1285. Fleishman LJ, Endler JA. 2000. Some comments on visual perception and the use of video playback in animal behavior studies. Acta Ethologica 3: 15–27. Fleishman LJ, McClintock WJ, D’Eath RB, Brainard DH, Endler JA. 1998. Colour perception and the use of video playback experiments in animal behaviour. Animal Behav- iour 56: 1035–1040. Frischknecht M. 1993. The breeding colouration of male three-spined sticklebacks (Gasterosteus aculeatus) as an indicator of energy investment in vigour. Evolutionary Ecol- ogy 7: 439–450. Gerald MS, Bernstein J, Hinkson R, Fosbury RAE. 2001. Formal method for objective assessment of primate color. American Journal of Primatology 53: 79–85. Goda M, Fujii R. 1998. The blue coloration of the common surgeonfish, Paracanthurus hepatus − II. Color revelation and color changes. Zoological Science 15: 323–333. 234 M. STEVENS ET AL. © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 Gonzalez RC, Woods RE, Eddins SL. 2004. Digital image processing using MATLAB. London: Pearson Education Ltd. Grether GF. 2000. Carotenoid limitation and mate preference evolution: a test of the indicator hypothesis in guppies (Poecilia reticulata). Evolution 54: 1712–1724. Grether GF, Kasahara S, Kolluru GR, Cooper EL. 2004. Sex-specific effects of carotenoid intake on the immunologi- cal response to allografts in guppies (Poecilia reticulata). Proceedings of the Royal Society of London Series B, Biolog- ical Sciences 271: 45–49. Hanselman D, Littlefield B. 2001. Mastering MATLAB 6: a comprehensive tutorial and reference. Upper Saddle River, NJ: Pearson Education International. Hart NS, Partridge JC, Cuthill IC. 1998. Visual pigments, oil droplets and cone photoreceptor distribution in the Euro- pean starling (Sturnus vulgaris). Journal of Experimental Biology 201: 1433–1446. Hendrickson A, Drucker D. 1992. The development of parafoveal and mid-peripheral human retina. Behavioural Brain Research 49: 21–31. Hong G, Lou RM, Rhodes PA. 2001. A study of digital camera colorimetric characterization based on polynomial modeling. Color. Research and Application 26: 76–84. 
Hunt BR, Lipsman RL, Rosenberg JM, Coombes HR, Osborn JE, Stuck GJ. 2003. A guide to MATLAB: for beginners and experienced users. Cambridge: Cambridge University Press. Hurlbert A. 1999. Colour vision; is colour constancy real? Cur- rent Biology 9: R558–R561. Jacobs GH. 1993. The distribution and nature of colour vision among the mammals. Biological Reviews 68: 413–471. Kelber A, Vorobyev M, Osorio D. 2003. Animal colour vision − behavioural tests and physiological concepts. Biolog- ical Reviews 78: 81–118. Kodric-Brown A, Johnson SC. 2002. Ultraviolet reflectance patterns of male guppies enhance their attractiveness to females. Animal Behaviour 63: 391–396. Koutsos EA, Clifford AJ, Calvert CC, Klasing KC. 2003. Maternal carotenoid status modifies the incorporation of dietary carotenoids into immune tissues of growing chickens (Gallus gallus domesticus). Journal of Nutrition 133: 1132– 1138. Künzler R, Bakker TCM. 1998. Computer animations as a tool in the study of mating preferences. Behaviour 135: 1137–1159. Lauziére YB, Gingras D, Ferrie FP. 1999. Color camera characterization with an application to detection under day- light. Trois-Rivières: Vision Interface, 280–287. Levin A, Weiss Y. 2004. User assisted separation of reflec- tions from a single image using a sparsity prior. European Conference on Computer Vision 1: 602–613. Losey GS Jr. 2003. Crypsis and communication functions of UV-visible coloration in two coral reef damselfish, Dascyllus aruanus and D. reticulates. Animal Behaviour 66: 299–307. Lythgoe JN. 1979. The ecology of vision. Oxford: Clarendon Press. Marr D, Hildreth E. 1980. Theory of edge detection. Proceed- ings of the Royal Society of London Series B, Biological Sciences 207: 187–217. Marshall NJ, Jennings K, McFarland WN, Loew ER, Losey GS Jr. 2003. Visual biology of Hawaiian coral reef fishes. II. Colors of Hawaiian coral reef fish. Copeia 3: 455– 466. Martinez-Verdú F, Pujol J, Capilla P. 2002. Calculation of the color matching functions of digital cameras from their complete spectral sensitivities. Journal of Imaging Science and Technology 46: 15–25. McGraw KJ, Ardia DR. 2003. Carotenoids, immunocompe- tence, and the information content of sexual colors: an exper- imental test. American Naturalist 162: 704–712. McGraw KJ, Ardia DR. 2005. Sex differences in carotenoid status and immune performance in zebra finches. Evolution- ary Ecology Research 7: 251–262. McGraw KJ, Hill GE, Parker RS. 2005. The physiological costs of being colourful: nutritional control of carotenoid uti- lization in the American goldfinch, Carduelis tristis. Animal Behaviour 69: 653–660. McGraw KJ, Nogare MC. 2004. Carotenoid pigments and the selectivity of psittacofulvin-based coloration systems in parrots. Comparative Biochemistry and Physiology B 138: 229–233. Mollon JD. 1999. Specifying, generating and measuring colours. In: Carpenter RHS, Robson JG, eds. Vision research: a practical guide to laboratory methods. Oxford: Oxford Uni- versity Press, 106–128. Navara KJ, Hill GE. 2003. Dietary carotenoid pigments and immune function in a songbird with extensive carotenoid- based plumage coloration. Behavioral Ecology 14: 909–916. Newton I. 1718. Opticks, or a treatise of the reflections, refrac- tions, inflections and colours of light, 2nd edn. London: Printed for W. and J. Innys. Parkkinen J, Jaaskelainen T, Kuittinen M. 1988. Spectral representation of color images. IEEE 9th International Com- ference on Pattern Recognition, Rome, Italy 2: 933–935. Párraga CA. 2003. 
Is the human visual system optimised for encoding the statistical information of natural scenes? PhD Thesis, University of Bristol. Párraga CA, Troscianko T, Tolhurst DJ. 2002. Spatiochro- matic properties of natural images and human vision. Cur- rent Biology 12: 483–487. Pietrewicz AT, Kamil AC. 1979. Search image formation in the blue jay (Cyanocitta cristata). Science 204: 1332–1333. Pryke SR, Lawes MJ, Andersson S. 2001. Agonistic caro- tenoid signalling in male red-collared widowbirds: aggres- sion related to the colour signal of both the territory owner and model intruder. Animal Behaviour 62: 695–704. Rasband WS. 1997–2006. Image J. Bethesda, MD: National Institutes of Health. Available at http:/rsb.info.nih.gov/ij/. Rosenthal GG, Evans CS. 1998. Female preference for swords in Xiphophorus helleri reflects a bias for large appar- ent size. Proceedings of the National Academy of Sciences of the United States of America 95: 4431–4436. Samaranch R, Gonzalez LM. 2000. Changes in morphology with age in Mediterranean monk seals (Monachus mona- chus). Marine Mammal Science 16: 141–157. Stevens M, Cuthill IC. 2005. The unsuitability of html-based colour charts for estimating animal colours − a comment on Berggren & Merilä. Frontiers in Zoology 2: 14. USING CAMERAS TO STUDY ANIMAL COLORATION 235 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 Stevens M, Cuthill IC. 2006. Disruptive coloration, crypsis and edge detection in early visual processing. Proceedings of the Royal Society Series B, Biological Sciences 273: 2141– 2147. Stokman HMG, Gevers T, Koenderink JJ. 2000. Color measurement by imaging spectrometry. Computer Vision and Image Understanding 79: 236–249. Sumner P, Arrese CA, Partridge JC. 2005. The ecology of visual pigment tuning in an Australian marsupial: the honey possum Tarsipes rostratus. Journal of Experimental Biology 208: 1803–1815. Sumner P, Mollon JD. 2000. Catarrhine photopigments are optimised for detecting targets against a foliage background. Journal of Experimental Biology 203: 1963–1986. Thayer AH. 1896. The law which underlies protective colora- tion. Auk 13: 477–482. Thayer GH. 1909. Concealing-coloration in the animal king- dom: an exposition of the laws of disguise through color and pattern: being a summary of Abbott H. Thayer’s discoveries. New York, NY: The Macmillan Co. Thévenaz P, Ruttimann UE, Unser M. 1998. A pyramid approach to subpixel registration based on intensity. IEEE Transactions on Image Processing 7: 27–41. Tinbergen N. 1974. Curious naturalists, rev. edn. London: Penguin Education Books. Villafuerte R, Negro JJ. 1998. Digital imaging for colour measurement in ecological research. Ecology Letters 1: 151– 154. Wedekind C, Meyer P, Frischknecht M, Niggli UA, Pfander H. 1998. Different carotenoids and potential infor- mation content of red coloration of male three-spined stick- leback. Journal of Chemical Ecology 24: 787–801. Westland S, Ripamonti C. 2004. Computational colour science using MATLAB. Chichester: John Wiley & Sons Ltd. Westmoreland D, Kiltie RA. 1996. Egg crypsis and clutch survival in three species of blackbirds (Icteridae). Biological Journal of the Linnean Society 58: 159–172. Windig JJ. 1991. Quantification of Lepidoptera wing patterns using an image analyzer. Journal of Research on the Lepi- doptera 30: 82–94. Wyszecki G, Stiles WS. 1982. Color science: concepts and methods, quantitative data and formulae, 2nd edn. New York, NY: John Wiley. Yin J, Cooperstock JR. 2004. 
Color correction methods with applications for digital projection environments. Journal of the Winter School of Computer Graphics 12: 499–506. Zuk M, Decruyenaere JG. 1994. Measuring individual vari- ation in colour: a comparison of two techniques. Biological Journal of the Linnean Society 53: 165–173. APPENDIX 1 GLOSSARY OF TECHNICAL TERMS Aliasing When different continuous signals become indistin- guishable as a result of digital sampling. Spatial alias- ing is manifested as the jagged appearance of lines and shapes in an image. Aperture Aperture refers to the diaphragm opening inside a photographic lens. The size of the opening regulates the amount of light passing through onto the colour filter array. Aperture size is usually referred to in f- numbers. Aperture also affects the ‘depth of field’ of an image. Bit depth This relates to image quality. A bit is the smallest unit of data, such as 1 or 0. A 2-bit image can have 22 = 4 grey levels (black, low grey, high grey and white). An 8-bit image can have 28 = 256 grey levels, ranging from 0 to 255. Colour images are often referred to as 24-bit images because they can store up to 8 bits in each of the three colour channels and therefore allow for 256 × 256 × 256 = 16.7 million colours. Charge-coupled device (CCD) A small photoelectronic imaging device containing numerous individual light-sensitive picture elements (pixels). Each pixel is capable of storing electronic charges created by the absorption of light and produc- ing varying amounts of charge in response to the amount of light they receive. This charge converts light into electrons, which pass through an analogue- to-digital converter, which produces a file of encoded digital information. Chromatic aberration This is caused by light rays of different wavelengths coming to focus at different distances from the lens causing blurred images. Blue will focus at the shortest distance and red at the greatest distance. Colour filter array Each pixel on a digital camera sensor contains a light sensitive photodiode which measures the brightness of light. These are covered with a pattern of colour filters, a colour filter array, to filter out different wavebands of light. Demosaicing algorithms Most digital cameras sample an image with red, green, and blue sensors arranged in an array, with one type at each location. However, an image is required with an R, G, and B-value at each pixel location. This is produced by interpolating the missing sensor values via so called ‘demosaicing’ algorithms, which come in many types. Exposure The exposure is the amount of light received by the camera’s sensors and is determined by the aperture and the integration time. 236 M. STEVENS ET AL. © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 Foveon sensors Foveon sensors capture colour by using three layers of photosensors at each location. This means that no interpolation is required to obtain values of R, G, and B at each pixel. Image resolution The resolution of a digital image is the number of pixels it contains. A 5-megapixel image is typically 2560 pixels wide and 1920 pixels high and has a resolution of 4915 200 pixels. JPEG JPEG (Joint Photographic Experts Group) is very common due to its small size and widespread com- patibility. JPEG is a lossy compression method, designed to save storage space. The JPEG algorithm divides the image into squares, which can be seen on badly compressed JPEGs. 
Then, a discrete cosine transformation is used to turn the square data into a set of curves, and throws away the less significant part of the data. The image information is rear- ranged into colour and detail information, compress- ing colour more than detail because changes in detail are easier to detect. It also sorts detail information into fine and coarse detail, discarding fine detail first. Lossy compression A data compression technique in which some data is lost. Lossy compression attempts to eliminate redun- dant or unnecessary information and dramatically reduces the size of a file by up to 90%. Lossy compres- sion can generate artefacts such as false colours and blockiness. JPEG is an image format that is based on lossy compression. Lossless compression Lossless compression is similar to ‘zipping’ a file, whereby if a file is compressed and later extracted, the content will be identical. No information is lost in the process. TIFF images can be compressed in a lossless way. Macro lens A lens that provides continuous focusing from infinity to extreme close-ups. Modulation transfer function The modulation transfer function describes how much a piece of optical equipment, such as a lens, blurs the image of an object. Widely spaced features, such as broad black and white stripes, do not lose much con- trast, because a little blurring only affects their edges, but fine stripes may appear to be a uniform grey after being blurred by the optical apparatus. The modula- tion transfer function is a measure of how much bright-to-dark contrast is lost, as a function of the width of the stripes, as the light goes through the optics. Nyquist frequency The Nyquist frequency is the highest spatial fre- quency where the CCD can still correctly record image detail without aliasing. RAW A RAW file contains the original image information as it comes off the sensor before internal camera process- ing. This data is typically 12 bits per pixel. The cam- era’s internal image processing software or computer software can interpolate the raw data to produce images with three colour channels (such as a TIFF image). RAW data is not modified by algorithms such as sharpening. RAW formats differ between camera manufacturers, and so specific software provided by the manufacturer, or self written software, has to be used to read them. Saturation In the context of calibrating a digital camera, we use this term to denote when a sensor reaches an upper limit of light captured and can no longer respond to additional light. This is also called ‘clipping’ as the image value cannot go above 255 (in an 8-bit image) regardless of how much additional light reaches the sensor. Saturation can also be used to refer to the apparent amount of hue in a colour, with saturated colours looking more vivid. Sensor resolution The number of effective non-interpolated pixels on a sensor. This is generally much lower than the image resolution because this is before interpolation has occurred. TIFF TIFF (Tagged Image File Format) is a very flexible file format. TIFFs can be uncompressed, lossless com- pressed, or can be lossy compressed. While JPEG images only support 8 bits per channel RGB images, TIFF also supports 16 bits per channel and multilayer CMYK images in PC and Macintosh format. White balance Most digital cameras have an automatic white bal- ance setting whereby the camera automatically sam- ples the brightest part of the image to represent white. 
However, this automatic method is often inaccurate and is undesirable in many situations. Most digital cameras also allow the white balance to be set manually.

APPENDIX 2
TECHNICAL DETAILS

In the present study, we used a Nikon Coolpix 5700 camera, with an effective pixel count of just under 5.0 megapixels. This does not have all of the desired features described in our paper (the intensity response is nonlinear and the zoom cannot be precisely fixed) and we offer no specific recommendation, but it is a good mid-priced product with high-quality optics and full control over metering and exposure. UV photography was with a PCO Variocam, fitted with a Nikon UV-Nikkor 105 mm lens, a Nikon FF52 UV pass filter and an Oriel 59875 'heat' filter (the CCD is sensitive to near-infrared). The camera was connected to a Toshiba Satellite 100 cs laptop and also to an Ikegami PM-931 REV.A monitor, which displayed the images that were to be saved via a PCO CRS MS-DOS based programme. With the camera remote control, the gain and the integration time of the images could be adjusted, with the gain set to either 12 dB or 24 dB and the integration time to between one and 128 video frames (1 frame = 40 ms).

Images were transferred to a PC and all measurements were taken with the (free) imaging programme Image J (Rasband, 1997–2006; Abràmoff et al., 2004). Measurements of standards were taken by drawing a box over the area of interest, and then using the histogram function to determine the mean greyscale value and standard deviation for each channel. All other image and data manipulations, including the linearization and transformation between coordinate systems, were performed with MATLAB (The Mathworks Inc.), although other languages, such as Java (Sun Microsystems, Inc.; Efford, 2000), are also useful. MATLAB has rapidly become an industry standard in vision science, on account of its efficiency at matrix mathematics and manipulation (photographic data are large matrices). MATLAB and Image J benefit from the large number of plug-ins and toolboxes written by users for other users.
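For readers who prefer to script this measurement step, the following MATLAB sketch reproduces the box-and-histogram measurement described above (mean and standard deviation of a region placed over a standard, for each channel). The file name and box coordinates are hypothetical placeholders, not values from the present study.

% Mean and standard deviation of a rectangular region, per colour channel.
img = double(imread('standards.tif'));   % hypothetical image file containing the standards
rows = 100:150; cols = 200:250;          % illustrative box drawn over one standard
for ch = 1:3
    region = img(rows, cols, ch);
    fprintf('Channel %d: mean = %.2f, SD = %.2f\n', ch, mean(region(:)), std(region(:)));
end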