key: cord-1056127-o44k9zll
authors: Griffith, C.
title: Chapter 44 Surface Sampling and the Detection of Contamination
date: 2016-12-31
journal: Handbook of Hygiene Control in the Food Industry
DOI: 10.1016/b978-0-08-100155-4.00044-3
sha: 0718398a18b54ebcdf24ac7372d3fa62153512b1
doc_id: 1056127
cord_uid: o44k9zll

Abstract: Cross-contamination is an increasingly important risk factor in food safety. Cleaning and disinfection regimens are essential components in its prevention but need to be validated, monitored, and verified. This in turn requires the implementation of protocols for surface sampling and the assessment of residual contamination. Visual assessment, although widely used, is ineffective in isolation but can be useful as part of an integrated approach. Microbial and nonmicrobial methods of sampling and testing are compared. Nonmicrobial assessment methods, especially ATP, are effective at monitoring residual surface soil. Traditional specific, and nonspecific, microbial methods indicate residual microbial contamination but not surface soil. Recent advances in molecular microbial methods and bioluminogenic tests are discussed. There is no single ideal surface test method and how, when, and where to sample are discussed within the framework of suggested guidelines, an integrated approach, and the use of trend analysis.

environments leading to the contamination of RTE foods. Businesses have been encouraged to adopt a risk-based approach in assessing cross-contamination in their food operations; nevertheless, it remains the Achilles heel of risk assessment (Griffith and Redmond, 2005). To reduce the opportunities for cross-contamination, cleaning requires effective management, although it is surprising, given their importance, that hand-contact surfaces are often omitted from cleaning schedules (Griffith and Redmond, 2005). The possibility of a pathogen from the environment getting into food may be in the order of 70%; however, it is perhaps especially important for Listeria monocytogenes (Lm). As stated by the International Life Sciences Institute (ILSI): "Lm may colonize a food processing unit and establish itself in a niche from where it may continuously or intermittently contaminate food" (ILSI, 2005). Two examples illustrate the problem:
• Listeria monocytogenes was found in 70 (3.5%) environmental swabs and 16 (7.4%) products from a Swiss sandwich plant. Of the 86 isolates, 93% were serotype 1/2a with six genetic profiles; 78% belonged to one genotype found on slicers, conveyors, tables, a bread feeding machine, and salmon and egg sandwiches. These strains persisted for more than 9 months on slicers and conveyors, and revision of the cleaning programs solved the problem. This emphasizes the importance of environmental monitoring to identify potential contamination problems and to act as an early warning.
• In an outbreak investigation at a dairy, approximately 100 product samples were collected from the dairy's processing facility and adjacent retail store. One environmental swab from a floor drain in the finished product area, one skim milk sample, and seven flavored milk samples tested positive for L. monocytogenes and matched the outbreak strain by PFGE using the two restriction enzymes. Contamination with the outbreak strain was found in close proximity to areas where hoses were used to clean equipment, illustrating the potential for cleaning equipment to cause cross-contamination.
Although not specifically mentioned in HACCP principles, floor diagrams/maps are very useful in assessing the potential for cross-contamination and are a requirement in standards such as the BRC (2015, clauses 4.3.1 and 4.3.2), along with the need to design and construct premises and identify people and product flows to prevent cross-contamination taking place. Fig. 44.3 illustrates a strategic approach that can be taken to cleaning management; its stages (an asterisk indicates a requirement for surface sampling/testing) are:
• Design, construct, and maintain premises appropriately.
• Construct the cleaning protocol (policy, schedules/programs, and record forms): chemicals/methods, disassembly, etc.; training required.
• Validate cleaning protocols*: establish benchmark clean values, using rapid and a range of microbiological testing; consider guidelines/reference values.
• Determine a suitable integrated test strategy based on: product risk and consequence of failure; facilities available; staff expertise; level of specific chemical residues in food products; test sensitivity in relation to product; cost-benefit analysis; need for microbiological testing (routine or random); likely presence of pathogens/indicator organisms; potential for Listeria sp. to grow and survive in the product.
• Monitor cleaning*: use the integrated strategy (start with visual assessment); base sampling frequency/stringency on risk; establish and implement corrective actions; record test results and the frequency of corrective actions; utilize results in trend analysis.
• Routine auditing/verification* of: cleaning protocols and their efficacy; cross-contamination potential.
A strategic approach to cleaning starts with the correct design, construction, operation (including work flow), and maintenance of equipment and premises. Collectively these will ensure that difficult-to-clean areas are eliminated, opportunities for cross-contamination are minimized, and that cleaning is more likely to be effective. Assuming these are appropriately considered, well-designed cleaning protocols/regimens need to be resourced, documented, and implemented and should provide a good basis for cleaning management. However, protocols on their own will not be successful or correctly implemented without an appropriate compliance culture (Griffith, 2014). Although beyond the scope of this chapter, this is an important food safety topic of increasing interest. Management responsibility and commitment, in both time and money, are important in ensuring successful cleaning and need to be evident. Unfortunately, senior management may deny that poor cleaning is a problem (Czarneski et al., 2012), with the process of cleaning sometimes perceived as being of low importance and cleaners poorly paid. Designing a cleaning regimen is best undertaken as the result of a site survey. This considers construction, production flows and type, frequency and sequence of cleaning, facilities available, shift patterns, types of food residues, etc. Documentation helps to maintain consistency and transparency associated with cleaning methods, is a requirement of certification standards, such as the BRC, and is usually based on standard operating procedures (SOPs). Typical cleaning documentation will include a policy statement, a schedule and procedures, detailed instructions on how to clean each area or piece of equipment, as well as record forms. Increasingly, the process is being supported by various software tools.
Auditors may well ask to see both the cleaning programs and the results and trends obtained from monitoring, that is, the routine assessment of cleaning efficacy. Cleaning regimens need to be current and part of the documented control system. Cleaning documentation should comprehensively cover cleaning equipment and materials. It is logical that you cannot get something clean without getting something else dirty, and this has implications for the materials (equipment, water, etc.) to be used in cleaning. Failure to maintain cleaning equipment appropriately can result in it becoming a vehicle for cross-contamination. One study (Christison et al., 2007) showed, using scanning electron microscopy, rods and cocci attached to cleaning tools that were genetically identical to isolates from associated RTE foods. Although cleaning practices will vary, Table 44.3 indicates the main stages likely to be involved in most wet cleaning regimens. The first three stages are designed to reduce surface soil (that is, cleaning), with stages 4-7 being disinfection options. These latter stages are used to ensure residual surface microbial numbers are reduced to low or acceptable levels. One stage that is subject to debate is the need for rinsing after disinfection. The European Food Directives are sometimes unclear on rinsing: some state it should be undertaken but others allow it as an option, if it can be assured that there are no residual chemicals that can adversely affect food, people, or equipment. The main argument in favor of rinsing is the removal of cleaning chemicals and possibly reducing the chances of developing biocide resistance, but this needs to be weighed against the microbiological quality of available water (at point of use, not entry into the premises), the potential for recontamination of cleaned surfaces, and the need to preserve a dry processing environment. In the United States, a number of sanitizers have approved limits for no-rinse application. If there are concerns about surface counts after cleaning, the use of these sanitizers first at higher levels, followed by rinsing, followed by their application at a no-rinse level, has been suggested (Tompkin et al., 1999). A further optional step used by some processors is to apply a gaseous bactericide, for example, ozone, hydrogen peroxide vapor (HPV), or chlorine dioxide, as an additional "terminal disinfection" stage intended to eliminate any residual or persistent pathogens or spoilage organisms. Ozone, for example, can achieve an extra kill before decomposing to oxygen (Moore et al., 2000; Bailey et al., 2007). Deciding if this is necessary is best left to the individual company, bearing in mind the type and concentration of cleaning chemicals used, local water quality, type of product, and the level of risk associated with it. It is important to realize, however, that the different stages in cleaning are interlinked and cumulatively help to ensure overall effectiveness. They can also inform how and when monitoring is best undertaken. Once the practicality and potential problems associated with cleaning implementation have been identified, a provisional cleaning plan can be designed. It is said that "you cannot manage what you do not measure" and, after construction, cleaning protocols need to be validated and verified. Validation means proving that the cleaning protocol is effective and is linked to the establishment of benchmark clean values.
Validation will involve comprehensive testing of different aspects of the cleaning/disinfection protocols and can also help to identify difficult-to-clean areas and to inform the design of routine monitoring plans. All of these require some form of "efficacy testing or surface sampling to assess cleanliness" which is best performed using an integrated approach. Different assessment techniques each have advantages and disadvantages and provide different information on the cleaning performed. Along with efficacy testing, routine audits of cleaning can be an important part of the verification process. Cleaning is the removal of soil and this process may also reduce the number of microorganisms present. Disinfection is specifically used to further reduce the number of microorganisms present and can be achieved using heat, chemicals, or irradiation. Both cleaning and disinfection can be monitored, although readers are reminded that disinfection is much more difficult and less likely to be achieved if prior cleaning is inappropriately performed. Unfortunately there is no single "ideal" method (Table 44 .4) for assessing cleaning and disinfection efficacy-the testing approach selected must link back to the potential surface contamination, the hazards that the cleaning and disinfection program is intended to control and the level of cleanliness required for that surface. Most methods (microbiological and nonmicrobiological) can be affected by residual detergents or disinfectants and this needs to be considered in how and when to sample and the need to incorporate neutralizers into any wetting agents or reagents used. Fig. 44 .4 indicates the possible consequences and combinations of surface conditions after cleaning and, if necessary, disinfection. The reduction in organic residues ensures removal of food debris, allergens, etc. and helps reduce the number of microorganisms, as well as preparing the surface for any required disinfection. A low residual microbial surface count reduces the chances of food contamination and hence food spoilage and possibly foodborne disease. The presence or absence of residual moisture is important in helping to prevent cross-contamination, both by reducing the potential for future microbial growth and survival, and in reducing potential transfer rates. Transfer rates between surfaces can vary, from less than 1% to nearly 100% and are greatly increased in the presence of moisture (Harrison et al., 2003) . However, drying needs to be performed in a way that will not recontaminate the surface. Fig. 44 .5 outlines the various microbiological and nonmicrobiological methods that could be used to assess the efficacy of cleaning and/or disinfection and these will be discussed in more detail in Sections 44.3 and 44.4. Visual assessment has historically been the most widely used method to assess cleanliness and can use the unaided eye or make use of microscopy. At its simplest level the surface can be examined for the presence of visual soil-if the surface looks visually soiled then it has not been cleaned properly and further testing may be of limited value. However, absence of visual soil does not mean absence of invisible soil or microorganisms. The advent of microbiological swabbing in the early 1900s offered the only major alternative for routine use until the late 1980s. Since then alternative, rapid nonmicrobiological chemical detection methods, starting with adenosine triphosphate (ATP), have been developed . 
These methods detect specific chemical/organic residues rather than microorganisms. As cultivation is not required, only a rapid chemical reaction, the test results are available in seconds or minutes, rather than hours or days. These newer, largely "nonmicrobiological" tests, for example, ATP, represent a truer assessment of cleanliness (absence of soil) than a microbial count. Soil can protect microorganisms, and therefore knowledge that the surface is free of soil provides some reassurance concerning the potential for microbial growth. Thus the philosophy of using nonmicrobiological tests is different, offering proactive cleanliness management with results available in time for corrective action to be taken (including recleaning) prior to surface use or subsequent disinfection. Microbial enumeration is reactive and may prove that a surface was or was not microbiologically contaminated, by which time a product may have left the factory. Some traditional microbiologists still feel happier with assessing surface microbial contamination, although this approach is further challenged by increased concern over food allergies. If cleaning is inadequately performed, food allergens from one food may remain on a surface and cross-contaminate other foods. Most nonmicrobiological methods primarily assess residual organic surface debris, although some, such as ATP, may, by virtue of their ability to assess microbial ATP, also detect microbial contamination; some ATP tests can typically detect as low as 10⁴ CFU/mL of bacteria (but do not detect viruses or bacterial spores). Microbiological methods only provide an indication of the numbers of general or specific residual surface organisms and provide no indication of surface organic soiling (ie, if cleaning was effective). It should be realized (although for some reason it is often attempted) that in food environments there is little value in trying to directly correlate surface microbial counts to ATP readings. Any correlations achieved could be described as "coincidental"; for a strong correlation, the ratio between microbial ATP and food debris ATP would need to be constant. This is unlikely to occur in many food premises and sites, with the possible exception of some hand-contact surfaces. It is possible to have foods with a high ATP count and low microbial count (eg, UHT milk); thus a small increase in product residue can increase the surface ATP count but not microbial numbers. Similarly, depending on the food product, for example, raw foods, and their level of microbial contamination, it is possible to have a relatively low ATP increase with higher increases in microbial numbers. More recently ATP technology has been linked to the assessment of acid phosphatase, an enzyme found in raw meat and poultry. After swabbing a surface and reacting for 2 or 5 min (depending on sensitivity required), light is emitted: the more light emitted, the more acid phosphatase is present and therefore the less clean the surface. The enzyme is inactivated during cooking and surfaces used for cooked products should give low readings. A simple comparison of the different approaches is presented in Table 44.5. While no single ideal method of testing exists, by combining different methods as part of an integrated protocol (see Section 44.5), valuable information on cleaning and disinfection performance can be obtained for validation, monitoring, and verification purposes.
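The point about correlation can be made concrete with a small, purely illustrative simulation (all numbers below are assumed, not taken from the chapter): if the food-residue contribution to surface ATP varies independently of the microbial load, RLU readings and colony counts show little relationship.

```python
import random
import statistics

# Illustrative sketch only: total surface ATP is modeled as the sum of microbial ATP
# and food-residue ATP. Unless the ratio between the two is constant, luminometer
# readings (RLU) and microbial counts correlate poorly.
random.seed(1)

def simulated_surface():
    microbes = 10 ** random.uniform(0, 4)          # hypothetical CFU per 100 cm2
    food_residue_atp = 10 ** random.uniform(1, 6)  # ATP from food soil, arbitrary units
    microbial_atp = microbes * 1.0                 # assume ~1 ATP unit per CFU
    rlu = microbial_atp + food_residue_atp         # proxy for the luminometer reading
    return microbes, rlu

surfaces = [simulated_surface() for _ in range(200)]
counts = [s[0] for s in surfaces]
rlus = [s[1] for s in surfaces]

r = statistics.correlation(counts, rlus)  # Pearson's r, Python 3.10+
print(f"Pearson r between CFU and RLU: {r:.2f}")  # typically weak when food soil varies independently
```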
The further aims of this chapter are to review in greater depth nonmicrobiological and microbiological methods for assessing cleaning and cleaning efficacy, in order to ensure appropriate and cost-effective cleaning, and to suggest ways to manage an integrated program of surface sampling. Visual assessment still has an important role to play in cleaning assessment but as part of an integrated assessment protocol (see Section 44.5). In isolation it is not a good method for assessing anything other than gross surface soil. However, most auditors will take a torch (flashlight) with them to inspect the visual cleanliness of dark/hidden, out-of-the-way places in food premises. More recently dye tests have been developed (invisible to the naked eye but visible under UV light) and are finding particular use in health care (Boyce et al., 2011). These are essentially a test of cleaning method rather than routine or random testing for surface cleanliness. Their use in the food industry could be hampered by the possibility of dye residues getting into food, causing safety or organoleptic problems, although possibly they could find use in nonfood-contact areas. The dye has to be covertly applied to surfaces prior to cleaning, and cleaning can be deemed acceptable if it has been removed. Simple visual assessment can be combined with magnification/microscopy, as well as touch, dust, or powder to detect grease or other residues. Another approach that has been used is to apply clear sticky tape to the surface to be tested (adhesive side in contact with the surface). After removal the tape can be placed on a clean microscope slide and examined under an ordinary light microscope; this approach has been used for determining surface mold contamination. Recent advances in microscopy have resulted in the development of surface observation methods, for microorganisms or biofilms, based on epifluorescent, confocal scanning laser, and episcopic differential interference contrast microscopy. These latter methods, while providing useful laboratory information, are impractical for routine use in food businesses. More recently a device for visually assessing surface cleanliness, based on detecting fluorescing chemicals, for example, chlorophyll residues in feces or meat, has become available. This can be of use in surface assessment in some food-processing areas. ATP is the universal energy currency, or donor, for metabolic processes in all living cells. It is present in viable microorganisms (not viruses) and in most foods and their residues in variable amounts. The original, and still most common, use of ATP bioluminescence assays (more recent innovations combine microbial cultivation with bioluminogenic and ATP chemistry; see Section 44.4.2) works on the principle (Fig. 44.6) that ATP in food/food residues and microorganisms, in the presence of an enzyme/substrate complex, leads to light emission. The light is measured quantitatively in a luminometer (light-detecting instrument), with results available in 10-30 seconds. The amount of light emitted is therefore proportional to the amount of ATP on a surface and hence its cleanliness. The level of ATP within cells varies depending upon the type of cell, for example, animal, yeast, bacteria, and its phase of growth, but the ATP pool in living cells is normally kept consistent by regulatory mechanisms.
The enzyme-substrate complex luciferin-luciferase converts the chemical energy associated with the ATP into light in a stoichiometric reaction, with one photon of light produced by the hydrolysis of one molecule of ATP. The light emitted is normally measured in relative light units (RLUs), calibrated for each make of instrument and set of reagents. Therefore the readings obtained from assessing routine cleaning need to be compared with baseline data representing acceptable clean values. These can be determined by carefully cleaning well-designed and well-maintained surfaces, following structured cleaning schedules, using detergents and disinfectants at the correct concentrations and for the correct contact times, and using clean equipment. A range of luminometers and tests are available and major new developments in assays and equipment occur approximately every few years. Originally luminometers were large and only suitable for laboratory use. These have evolved over the years into small handheld models, which can be used anywhere within a plant, and some can perform a range of additional tests. Some luminometers use a photomultiplier tube in the light detection system; other manufacturers use photodiode-based systems. Each has advantages and disadvantages: photodiode instruments are cheaper and more robust, with a lower background noise that should not vary much over time, but their use could reduce overall test sensitivity. This can be overcome depending upon the chemistry of the reagents used, which can be lyophilized or based on liquid-stable chemistry, as well as their configuration or packaging. What is important is the overall performance of the instrument in conjunction with the test chemistry (linearity, sensitivity, repeatability, and accuracy), and various reports are available comparing different instruments (Kupski et al., 2010) as well as recommendations for selecting an instrument (Griffith, 2012). Most newer instruments offer trend analysis software, which can store and then download data to a PC. This software is very useful for comparing data over time and from different sites and plants; it can indicate areas frequently improperly cleaned and surfaces that are moving toward loss of control, and allows comparison between cleaning operatives. One manufacturer has added the ability to perform additional checks, for example, pH and temperature measurement, by adding additional test probes and facilities to the luminometer. This may be useful but can be problematic if one part develops a fault and, ultimately, it is how well the luminometer and the test designed for it actually perform that is the most important determinant of choice. Most manufacturers offer calibration and/or positive/negative controls to help ensure accuracy. Simplifications in the assay test swabs have resulted in the ability for testing to be performed by nontechnical staff, with tubes and pipettes replaced by simple, single-shot, all-in-one assays. The exact chemical formulations used in the assays vary with suppliers, but typically contain luciferin/luciferase, magnesium ions, buffering, substrates, stabilizers, and extractants (to remove the ATP from living cells). They vary in shelf-life, depending on precise composition and the temperature and manner of their storage. ATP is found in many, but not necessarily all, foodstuffs.
High counts can be found in some fresh foods, for example, tomatoes, while other foods, especially highly processed foods such as fats, oils, or sugar, contain very low amounts. Detergents/sanitizers used in cleaning can have a similar effect to the extractants used in the tests and different studies have demonstrated that commonly used cleaning chemicals can cause either quenching or enhancement of the ATP signal. It is therefore desirable, for consistency of results, to ensure that cleaning agents are removed by rinsing before testing is performed. The repeatability and reliability of the instruments and their tests can vary considerably among manufacturers, but is generally superior to microbiological swabbing. The sensitivity of the instruments and their tests is variable and higher-sensitivity instrument/tests combinations can detect down to 0.1 fmol of ATP. There have been discussions over exactly how sensitive ATP tests need to be and although there is a demand for a certain minimum sensitivity, if a test could be too sensitive is more debatable. The key requirement is that they should be able to discriminate "well cleaned" from "inadequately cleaned" surfaces, which are important or relevant to a business. So, for example, requirements for surfaces in an aseptic fill product would be quite different from those in drains. While test manufacturers will provide guidance on clean benchmark levels, they are usually best determined on-site by the food business and then used as the basis for continuous improvement. ATP has also been adapted by some test manufacturers for the detection of allergen residues and it is claimed that some tests detect down to 0.1-5 ppm of allergen food residues. Following the development and application of ATP bioluminescence as a measure of cleanliness, other chemical assays/ tests for food-residue components have been investigated. The stimulus is to develop a noninstrument-dependent test, that is cheap and functional. A range of other chemical residues including protein, reducing sugars, and nicotinamide adenine dinucleotide (NAD), are available as the basis for rapid cleaning tests. Usually the tests lead to the production of a single, or sequence of, colored end-products within a specified time (1-10 min). The color changes can be qualitatively assessed visually. This can be subjective and the option to use a cheap sample instrument to measure and/or record the results is available for some tests, if needed. The subjectivity is most variable for marginally unclean surfaces, the clean or very dirty being less subjective. Some tests retain a swab-based format, while others use test strips impregnated with relevant reagents. Which, if any, of these tests will be of benefit to a food business will depend on a number of factors (Table 44 .6), not least of which is the sensitivity of the assay. Such tests, if cheaper and instrument-independent, may find potential use in food service establishments. Often criticized for poor cleanliness, they are the reported location for most outbreaks of food poisoning (Griffith, 2000) . Of the non-ATP assays, protein detection methods offer potential where the food residues, for example, poultry/meat/ dairy products, are high in protein and also offer particular use in detecting allergens (Easter, 2012) . Although not a specific allergen test, most important food allergens are proteinaceous in nature. In some of the assays, other food nonprotein, reducing components may also bring about a color change. 
Some methods make use of an enhanced Biuret reaction. Under alkaline conditions the peptide bonds of proteins form a complex with the copper(II) (Cu²⁺) of the Biuret reagent, reducing it to copper(I) (Cu⁺) ions. These react with bicinchoninic acid, producing an intense purple color. Other protein tests make use of different so-called "protein error indicator" dyes (eg, tetrabromophenol blue), which change color in the presence of protein at a particular pH. These tests may be swab-based, although some versions use test strips or pads. Depending upon the food examined, protein tests may be more or less sensitive than ATP bioluminescence (Moore and Griffith, 2002c). Detection levels between 1 and 10 µg of protein are possible, depending on the test and whether an incubation step is used. The intensity of the color and its speed of production provide an indication of the level of soiling, although results are usually just pass/fail. NAD and related forms are chemical residues, which are also widely distributed in biological materials, including foods and microorganisms. Hence, the level of NAD on a surface provides a measure of organic soiling. NAD is detected in a chemical reaction leading to the production of a pink/purple color on a test strip, within 5 min. As with the other chemical residue detection kits, lack of a positive reaction does not indicate the absence of microorganisms. The usefulness of the test needs to be trialed and this will depend upon the type of foods produced and the level of NAD they contain. Other swab-based tests can be used to detect either glucose or glucose and lactose at levels down to 2.5 µmol of glucose or 5.0 µmol of lactose. Glucose may be present in up to 85% of food residues, while lactose determination is of practical benefit to the dairy industry. In most cases, the test is likely to be less sensitive than the equivalent ATP assay but it is claimed that for many food residues, it is nearly as good and is rapid and noninstrument-based. As with all the rapid chemical tests, no conclusion regarding the absence of microorganisms can be inferred from a negative test. The market for rapid test methods is likely to increase, although it is probably fair to say the ideal test method does not yet exist; their use needs to be considered in relation to the type of business, the food produced, and the use of an integrated test protocol. Microbiological surface sampling cannot be described as new, with reports of its use going back to the 1920s and 1930s (Saelhof and Heinekamp, 1920; Krogg and Dougherty, 1936), although precise methodological details are lacking. However, most of this early work was based on swabbing, with direct agar contact methods only developed later, although the future is likely to see greater use of molecular methods. The main microbiological methods in use within the food industry include the use of swabs, sponges, or wipes to recover organisms from the surface followed by their cultivation on/in nutrient media (effectively indirect). The rationale for such testing can be either to semiquantitatively estimate the residual number of general or indicator organisms present, that is, to provide evidence of cleaning efficacy, or to detect the presence of specific pathogens. Indicator organisms can reflect surface microbiological quality and whether conditions may permit the presence/growth of more specific pathogens.
Often the latter may be like looking for a "needle in a haystack," but is of particular benefit if:
• A specific pathogen has been found in a food sample
• Investigating cases of food poisoning
• It is part of a specific pathogen control program, for example, controlling Listeria in food premises.
Testing surfaces for the presence of pathogens, for example, L. monocytogenes, which could get into the food and cause problems, is a fundamentally different philosophical approach and is used to indicate risk. In this latter case it is usually a qualitative value that is needed rather than a semiquantitative one, that is, whether a specific pathogen is present. In testing for pathogens it is usual to test a larger surface area, for example, 1000 cm² rather than the more conventional 100 cm² (Willes et al., 2013). The medium inoculated by the swab used can be solid, semisolid, or liquid. In the former, colonies counted on the surface are assumed to originate from one organism, and this can contribute to variability. When wanting a count to reflect cleanliness for comparison purposes as part of a routine testing program, a specific area (often 100 cm²) should be swabbed. If looking for the presence of a pathogen, a large surface area should be tested, the requirement being: is a pathogen present or not? For such purposes large sponges (with or without a handle) are usually superior to swabs. Crucial in any microbiological surface testing is the recovery efficiency (RE) (Trafny et al., 2014), and this can vary with the method, the number and types of microorganisms, and the nature of the surface. Methods where the nutrient medium is in direct contact with the surface tested (contact plates and dipslides) are easier to use and could theoretically give superior recovery. How the comparison trials are set up can influence the results but, in two large-scale comparisons (Salo et al., 2000, 2002), contact methods did give superior results, although the differences were not always significant. A problem with all cultivation methods is the need to remove the organisms from the surface in order to cultivate them. This has led to "rinsing" the surface to be tested (the rinse fluid then being used as the source of microorganisms), an approach widely used where access to the surface can be difficult, for example, in CIP systems. More recently efforts to remove surface microorganisms, especially in biofilms, by sonication have been tried (Ismail et al., 2013). Apart from practical problems, this raises questions about the importance and validity of the numbers recovered in relation to product contamination. The choice of microbial method will depend on the precise information required and the prevailing circumstances (Table 44.7). A problem with cultivation methods can occur when viable microorganisms go undetected due to stress, giving rise to viable but nonculturable (VBNC) bacteria. However, such organisms may or may not be able to cause spoilage or be infective/retain their pathogenicity. Nevertheless, a positive result indicates that the surface has a history of previous contamination and could present a risk. Swabbing, in one form or another, remains the oldest and probably the most widely used method for "surface monitoring" (Moore and Griffith, 2002a, 2007). It should be noted that, although the term monitoring is widely used, this often does not conform to the HACCP definition.
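A rough probability argument (an assumption of this sketch, not a calculation from the chapter) shows why a larger area is sampled when looking for pathogens: if contamination is sparse and roughly random, the chance of a sample containing at least one cell rises steeply with the area tested.

```python
import math

# Illustrative Poisson sketch with hypothetical numbers: if a pathogen is scattered at a
# very low density, the chance that a sampled area contains at least one cell is roughly
#   P(detect) ~= 1 - exp(-density * area * recovery)
def p_detect(density_per_cm2, area_cm2, recovery=1.0):
    return 1.0 - math.exp(-density_per_cm2 * area_cm2 * recovery)

density = 1 / 500.0  # assumed: one cell per 500 cm2 on average
for area in (100, 1000):
    print(f"{area} cm2 sampled: P(detect) ~ {p_detect(density, area):.0%}")

# A 1000 cm2 sponge sample is far more likely to pick up sparse contamination
# than a conventional 100 cm2 swab, which is why larger areas are tested for pathogens.
```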
For the latter, results must be obtained in time for corrective action to be taken, and swabbing, like impression plates, relies on cultivation, which, depending on the organism, can take hours, days, or weeks (eg, for TB). Most swabbing protocols are based upon the swab-rinse technique originally developed by Manheimer and Yheunez in 1917 (Favero et al., 1968). A sterile swab, consisting of a more or less flexible shaft with a fibrous bud or tip, is premoistened in an appropriate wetting agent and inoculated by rubbing over the surface to be tested. The microorganisms transferred to the swab can then be cultivated and counted, either by inoculating the swab directly onto an appropriate solid culture medium or by releasing captured microorganisms into a known quantity of sterile recovery diluent, which is then used to prepare pour plates. This description of swabbing also indicates some of the variability in the technique (Moore and Griffith, 2007; Ismail et al., 2013; Downey et al., 2012), which can considerably affect the apparent number of organisms recovered (Moore and Griffith, 2002a, 2007). If the number of microorganisms on a surface is known (as in laboratory conditions), and compared with the number obtained from swabbing, there is low recovery, particularly at low surface population densities below 10⁴ cells per cm² (Holah et al., 1988). Additionally, the swabbing technique lacks reliability, that is, repeatability and reproducibility are poor (Moore and Griffith, 2002a,b, 2007; Moore et al., 2001). Various "standard" methods are available, including ISO 18593:2004, although currently there is no universally accepted method of swabbing. Some of the possible variables are indicated in Table 44.8. Swabbing is widely used in industry to assess surface contamination, although not for larger surface areas (Ismail et al., 2013), and as a reference for comparison with other methods. However, basic information is still lacking as to the optimum protocol and the effect that variations may have on recovery rates (Moore and Griffith, 2007). Overall recovery can be seen as a function of the removal of microorganisms from the test surface, their release from the swab, and their subsequent ability to grow. Recovery rates can vary from 0.1% to 25% (Moore and Griffith, 2007) and will depend on the technique used, but an optimum recovery rate of 10% for Dacron swabs is not uncommon. The type and number of microorganisms sampled can have a major effect on recovery (Rose et al., 2011; Downey et al., 2012) and they can become increasingly difficult to remove once they have adhered to a surface (Cunliffe et al., 1999), particularly in biofilms. Additionally, organism retention within the bud fibers results in poor repeatability and sensitivity. Techniques/variables that improve one element of the swabbing process may adversely affect another. One study (Moore and Griffith, 2002a) showed that protocols that improved removal adversely affected release. Optimum overall recovery may therefore be a trade-off or compromise between different components of the whole process. The lack of repeatability can make it difficult to interpret the results from a single environmental swab, especially between staff, from different plants, and when different protocols are used. An apparent low surface count from a single swab may reflect swabbing technique as much as low contamination levels. This could lead to a false impression of cleaning efficacy and whether guidelines or company specifications have been achieved.
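As a hedged illustration of why recovery efficiency matters, the sketch below back-calculates an apparent surface count from a hypothetical swab result; the workflow and figures are assumed for illustration only, although the 0.1-25% recovery range is taken from the text.

```python
# Minimal sketch (assumed workflow, not a standard protocol): estimating an apparent
# surface density from a swab result, and showing how strongly the estimate depends
# on the recovery efficiency assumed.
def apparent_surface_density(colonies_counted, diluent_ml, plated_ml,
                             area_swabbed_cm2, recovery_efficiency):
    cfu_recovered = colonies_counted * (diluent_ml / plated_ml)  # CFU released into diluent
    return cfu_recovered / (area_swabbed_cm2 * recovery_efficiency)

# Hypothetical example: 30 colonies from 1 mL plated out of 10 mL diluent, 100 cm2 swabbed.
for re in (0.25, 0.10, 0.001):
    est = apparent_surface_density(30, 10, 1, 100, re)
    print(f"Assumed recovery {re:.1%}: ~{est:.0f} CFU/cm2")
```

The same plate count implies a surface loading anywhere from about 12 to 3000 CFU/cm² depending on the recovery assumed, which is why single swab results are hard to interpret in isolation.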
Swabbing, as with other surface assessment techniques, is best used to establish trends in the performance of the cleaning and disinfection program using multiple test results when over a period of time, the program can be seen to be failing or improving. The food manufacturers' view is that the variability in swabbing per se, especially if standardized (Moore and Griffith, 2007) , is not sufficient to prevent the detection of high surface counts on a given day, that is, the results of a badly implemented cleaning operation. Understanding the problems associated with overall recovery rates can help to improve and control the process (Moore and Griffith, 2007) . Sampling/wetting solutions, designed to maintain isotonic conditions and reduce physiological stress, can be used to maintain the viability of microorganisms recovered from surfaces (Campden BRI, 2003) . Care needs to be taken in their selection to ensure they do not artificially increase the count by providing a medium in which recovered microorganisms can grow during transit (Moore and Griffith, 2007) . Some surfaces may still have residual disinfectant present and neutralizing agents, appropriate for the disinfectant being used, can be added to the wetting solution. These help to prevent organisms, removed from the surface (where they may be more resistant), being killed by residual sanitizer and thereby giving an "artificially" reduced count. Ideally swabs should be processed as soon as possible, although this is often impractical, especially when outside laboratories are used. Under such conditions, samples should be transported nonfrozen at a low temperature (<5°C), this can result in minimal differences compared with real-time analysis (Campden BRI, 2003) . Times of sampling and processing need to be recorded, as well as delivery temperature, so that any unusual results or significant differences from the norm can be identified and considered when interpreting the results. Variables of time and wetting agent also need to be considered and optimized in sampling for specific pathogens. Appropriate pre-enrichment media should be used, although overgrowth by more rapidly growing nonpathogens needs to be considered. Some manufacturers may add a surfactant to their wetting solutions to improve "pick-up" from the test surface. These can, in some cases, artificially increase the number of colonies counted by breaking up clumps of organisms and thereby increasing the number of "colony-forming units." Concerns over the inability of swab buds to release recovered organisms have prompted one manufacturer to develop a radically new type of swab (Moore and Griffith, 2007) . This lacks the normal fibrous bud, which is replaced with short textured flocked nylon in spatula or swab format. This device releases more of the organisms removed from a surface and can yield an approximate 1 log improved overall recovery compared with traditional swabs. An alternative approach has led to the development of a wet or dry vacuum bacterial collection system. This may be of particular use in pathogen testing as it allows a much larger surface to be assessed without the need/use of a swab to lift/remove the organism from the surface being tested. Another variation involves self-contained "all-in-one media and hygiene swab" in a tube with the potential to offer more rapid results (Moore and Griffith, 2002b) . A swab, after testing a surface, is returned to its accompanying culture tube containing a liquid or semisolid agar incorporating an indicator system. 
Microorganisms removed from the surface and retained by the swab grow and, as they multiply their growth can be detected, for example, by a color change. The results are semiquantitative in that the number of bacteria is not recorded but the time taken for the indicator to change color is a measure of the original microbial load. Unclean surfaces, depending upon the extent of microbial contamination, can test positive within 12 hours. When nonspecific media are used a general aerobic colony count is obtained, alternatively a selective or enrichment medium is used to test for the indicator organism or pathogens. Indicator systems can be based on chromogenic, fluorogenic, or bioluminogenic detection principles. In chromogenic assays the medium changes color as a consequence of microbial metabolism, and depending on the test indicates either the presence or absence of pathogen/group of organisms or the approximate degree of microbial surface contamination. Traditionally this could detect relevant organisms within as short a time as 18 hours (depending on the organism being tested for). More recently this time has been reduced by combining cultivation with a bioluminogenic test using a luminometer which can considerably reduce the detection time . This offers, depending on shift patterns, an opportunity for corrective action to be taken before further food production takes place. Using this approach, which correlates well with traditional counts, the time taken for detection extends from 1 hour, if the surface is heavily contaminated, to 8 hours for lightly contaminated surfaces. Such bioluminogenic tests are available for coliforms, Enterobacteriaceae, E. coli, and Listeria. Being able to perform a combined microbial cultivation and ATP assay extends the usefulness of luminometers beyond the conventional approach for estimating ATP in surface residues. Sponges work on a similar principle to swabbing, in that microorganisms are removed, released, and cultivated. Recovery is by wiping a compressed sterile sponge (eg, cellulose acetate) of varying sizes over the test surface. Some have no swab shaft, and in order to avoid contamination, the sponge needs to be held using a sterile glove, usually provided with the sponge. Sponges may be premoistened or require the addition of a wetting agent. After inoculation the sponge is returned to a sterile envelope/packet and transported to a laboratory. After the addition of a suitable diluent to the envelope, usually followed by agitation/stomaching, the released organisms can be counted. Similar errors to those encountered in swabbing may occur, and there is some evidence to suggest that the sponge matrix retains even more of the recovered organisms than swabbing, resulting in lower overall recovery (Moore and Griffith, 2002b) . However sponges if returned to an enrichment medium for pathogen detection offer superior sensitivity and are not affected by the microorganisms being attached to the sponge matrix. Any organisms in the sponge go on to grow and multiply and contribute to a positive result. Some sponges also offer the advantage of greater surface area: being much bigger than conventional swabs they allow larger surface areas to be tested, and may therefore be more useful in testing surfaces for pathogens. Greater pressure can also be applied than with swabs. Other variations include sponges on sticks, and in France, the use of gauze to swab surfaces. 
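The inverse relationship between initial load and time to a positive signal in growth-based indicator swabs can be sketched with a simple exponential-growth model; the detection threshold and doubling time below are assumptions chosen only to reproduce the general 1-8 hour pattern described above.

```python
import math

# Illustrative sketch with assumed numbers: in growth-based indicator swabs the time to a
# detectable signal depends on the starting load and the doubling time, roughly
#   t_detect ~= doubling_time * log2(detection_threshold / initial_cfu)
def hours_to_detection(initial_cfu, detection_threshold=1e6, doubling_time_h=0.5):
    if initial_cfu >= detection_threshold:
        return 0.0
    return doubling_time_h * math.log2(detection_threshold / initial_cfu)

for n0 in (1e5, 1e3, 10):
    print(f"Start {n0:.0e} CFU: ~{hours_to_detection(n0):.1f} h to signal")

# Heavily contaminated surfaces flag quickly; lightly contaminated ones take several hours,
# consistent with the 1-8 h range described in the chapter.
```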
Recently, research has indicated that electrostatic wipes offer a better overall performance than swabs (Lutz et al., 2013); however, validation data on the effectiveness of some of these alternatives under a wide range of conditions and organisms are not widely available. All direct agar contact methods, or replicate organism direct area contact (RODAC), involve pressing sterile agar onto a surface to be sampled. For this reason they are sometimes called "printing methods" (Ismail et al., 2013). A contact time of 10 seconds with a force of 25 g/cm², without lateral movement, is suggested (ISO 14698:2004). Microorganisms are directly transferred onto the agar surface and, after incubation for an appropriate length of time, multiply and form colonies, which are visible and can be counted. In general this approach is best suited to smooth, flat surfaces. The methods vary in how the agar is presented. Contact plates resemble small plastic Petri dishes with a lid. The agar is poured into them, leaving a convex contact surface. After removing the lid the agar is pressed onto the test surface. The contact plates are then incubated and examined 24-48 hours later. Agar immersion, plating, and contact (AIPC) slides, more commonly referred to as dipslides or paddles (in the United States), were developed from "dip spoons" used in counting the numbers of organisms in urine samples. They comprise a double-sided hinged paddle with a neutral or selective agar attached to both sides. The paddle is contained within a transparent cylindrical tube or plastic container. The dipslide is removed, pressed onto the surface to be tested, and replaced in the tube; the resulting colony growth is counted or compared with pictorial estimates/diagrams of surface counts. They can also be used for counting the number of organisms in liquid samples of food, water, or rinse water. Recently a flexible hybrid contact plate/dipslide, to test more irregular-shaped surfaces, has become available. Other variations include the use of petrifilm to replace traditional agar plates for cultivation. These are small, thin films coated with nutrients and gelling agents. After wetting the film with approximately 1 mL of deionized water to rehydrate the growth medium, it can be used to provide a surface count. More recently a novel roller sampler was found to give a higher yield than conventional contact plates (Lutz et al., 2013). Direct agar contact methods have a number of advantages and disadvantages compared with traditional swabbing. Advantages include ease of use, generally lower costs, and better recovery and repeatability (Salo et al., 2000, 2002; Moore et al., 2001; Moore and Griffith, 2002b). Disadvantages include being more suited to flat surfaces; on very contaminated surfaces overgrowth can occur, which can make any statistical analysis of the results more problematic. However, this is not a problem if only an indication of cleaning adequacy (that is, pass or fail) is required, rather than the precise number of organisms. It is easy to count the individual colonies obtained from clean or marginally unclean surfaces, based on the clean surface counts currently considered attainable (see Section 44.5.1). If a more precise number of colonies from a heavily contaminated surface is required, then agar contact methods may be inappropriate.
Sampling methods discussed so far have included the assessment of chemicals such as ATP or protein (primarily for cleaning), although these chemicals are also found in cells/debris of nonmicrobial origin, as well as the more specific cultivation of microbial cells. A range of molecular methods is now available for the detection of groups, strains, or even specific subtypes of microorganisms, including pathogens. Cultivation methods still require time, usually one or more days (although, as can be seen, this is being reduced), and molecular methods offer faster speed (although hours, as opposed to seconds), greater sensitivity, and specificity. Often based on either DNA or RNA, these methods target and amplify specific sections of a microorganism's nucleic acid to a detectable level. Studies utilizing molecular methods have revealed the diversity found on some surfaces and have resulted in the characterization of the entire microbial community of an environmental sample (NIST, 2012). Techniques include the polymerase chain reaction (PCR), reverse transcriptase PCR (RT-PCR), and nucleic acid sequence-based amplification (NASBA). In real-time PCR the processes of amplification and detection are simultaneous. One potential disadvantage is that such techniques often do not distinguish between living microorganisms and noninfective nucleic acid and therefore only indicate that at some stage the organism was present on that surface, although they may be capable of detecting VBNC bacteria. At present, molecular methods require more technical expertise and high-cost equipment, are more expensive, and are primarily used in outbreak investigations or to trace/track microorganisms within plants. However, with protocol advances, it is likely that in the future they will be used more routinely in assessing the effectiveness of disinfection in particular, or in estimating risk. Ideally risk assessment also requires knowledge of the number of organisms, and some molecular techniques can be made quantitative, for example, quantitative real-time PCR (qPCR). One laboratory study (Buttner et al., 2007) compared conventional cultivation with qPCR on a range of surfaces, although only one organism was tested. Cultivation techniques yielded few viable cells, whereas qPCR gave much higher results but represented nucleic acid from viable and nonviable cells. Depending on the analysis method used, sample pretreatment may be necessary, which can add to costs and lengthen the time taken. When prosecutions for dirty premises/equipment, particularly in the food service sector, take place they are usually based on visual assessment. There are no legal standards for cleanliness other than visual, although a range of guidelines has been proposed. These vary (Table 44.9); often their derivation is unclear, but they are usually based upon an often erroneous perception of risk or of what is acceptable. Due to lack of data for risk assessment, an alternative strategy is to decide what is attainable after correct implementation of a well-designed cleaning program. However, variability can undermine confidence in sampling results (Downey et al., 2012). [Table 44.9 guideline values include, for example, a target of <500 RLUs, which applies to the use of one specific ATP test/equipment combination.]
Sources of variation need to be controlled (Moore and Griffith, 2007) and variability needs to be considered in setting guidelines and within the context of risk. Validation work (ie, when a cleaning protocol is first formulated) should involve extensive sampling and be considered within the framework of a statistically based trend analysis approach. One approach, used for ATP, is to determine the mean value considered as clean for a surface (n = minimum of 10) and to apply a fail value of the mean plus 3 times the standard deviation. Values between the mean and fail levels can be considered as cautionary or marginally acceptable. Studies on a wide range of over 10,000 surfaces (Moore and Griffith, 2007; Lewis et al., 2008; Redmond et al., 2009) indicate that in most cases, levels of <2.5 CFU/cm² for a general surface count are attainable. This is relatively close to some suggested guidelines, although up to 10 CFU/cm² is considered acceptable by some (Table 44.9; NSW Food Authority, 2012). Failure to achieve this level of cleanliness or disinfection may mean the cleaning protocol is poorly constructed, is not implemented well, unclean materials are used, or the surface is in poor condition/cannot be satisfactorily cleaned. Similar guidelines, using ATP bioluminescence, have also been proposed. However, any ATP guidelines should always be considered in relation to risk, possible soil types, and instrument/test combination. Crucial to all guidelines is agreement on consistent/approved sampling methods. It must be recognized that developing a monitoring protocol or strategy is pointless if cleaning itself is poorly implemented and managed. No amount of testing in itself will directly achieve cleaner surfaces, but its value is in indirectly informing producers about the efficacy of the cleaning systems in use. In recognition that no single ideal test exists, the combining of test methods into a coherent protocol, relevant to a business, as part of a consistent approach to plant sanitation is recommended (Figs. 44.7 and 44.8). [Figure 44.7 shows the stages in an integrated cleaning monitoring program where no microbiological facilities are available (food service, retail, small processor).] The extent, structure, and use of such protocols are likely to be dependent on the plant, the cleaning methods used, the level of risk associated with the product, and the sophistication of the quality systems in use. An integrated protocol should recognize the type of information provided by, or the weakness associated with, one test and then use another to complement it or make good its deficiencies in relation to the information required. This approach has been previously recommended and incorporates corrective actions. The starting point for any protocol should be visual assessment. This is quick and cheap: if a surface is visually dirty then there is likely to be little point in any further testing; additional notes should be made of visible moisture and surface condition/wear. However, in isolation, visual assessment is not a good indicator of surface cleanliness and, therefore, any decision about further testing needs to be considered in relation to product risk as well as the availability of other tests and the types of food soil. One or more types of rapid testing, for example, ATP, can be combined with microbiological methods to determine the effectiveness of surface cleaning and disinfection.
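The mean-plus-three-standard-deviations approach described above translates directly into a small calculation; the RLU values below are hypothetical, and real benchmarks must be established for the specific surface, instrument, and reagent combination.

```python
import statistics

# Minimal sketch of the benchmarking approach described in the text (values are hypothetical).
baseline_rlu = [110, 95, 130, 120, 105, 90, 140, 115, 100, 125]  # >=10 readings of a well-cleaned surface

mean = statistics.mean(baseline_rlu)
sd = statistics.stdev(baseline_rlu)
pass_limit = mean            # at or below the clean mean: pass
fail_limit = mean + 3 * sd   # above mean + 3 SD: fail, reclean and retest

def classify(reading_rlu):
    if reading_rlu <= pass_limit:
        return "pass"
    if reading_rlu <= fail_limit:
        return "caution (marginally acceptable)"
    return "fail - reclean and retest"

for reading in (100, 140, 300):
    print(reading, "->", classify(reading))
```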
Additionally, both microbiological and nonmicrobiological tests have value in validating the original cleaning program and investigating the reasons for any failure to clean effectively (Table 44.10). Rapid tests can be used after cleaning, prior to disinfection, to determine whether the surfaces are sufficiently free of soiling to enable successful disinfection. They can also help to identify areas difficult to clean or routinely poorly cleaned, thus indicating where microbiological testing is most useful. This type of integrated approach provides a better indication of cleaning efficacy, helps to provide transparency, and demonstrates a company's concern for effective cleaning. Additionally, it has the potential to save on cleaning costs by identifying what is, or is not, effective or necessary. Any overall policy to ensure clean surfaces should include monitoring surface cleanliness. When and where to monitor (Table 44.11) needs to be considered in relation to surface type, risk, the potential for cross-contamination, the type of information required, and the reasons for sampling. Firms should construct an appropriate environmental sampling plan; this should specify the approaches, methods, numbers, and types of sample for each location, as well as a results communication strategy. This helps to provide a consistent approach to routine sampling but should also give flexibility and allow for investigative or nonroutine sampling based on observations of poor cleaning practices/results, product problems, or the need to adopt a "seek and destroy" approach (Butts, 2003; Malley et al., 2015). It is important that the results of any testing are communicated to the correct people. Poor/noncommunication of results was partly responsible for at least one major outbreak of listeriosis, where positive L. monocytogenes isolations were not reported to senior management (Weatherill, 2009). It is unfortunate that in some companies, sampling concentrates on the center of large flat surfaces. Easier-to-sample surfaces are usually easier to clean, and so are clean; hard-to-sample surfaces are often more difficult to clean and may be less well cleaned or overlooked. Less attention may be given to hand-contact surfaces or cracks and crevices where soil, and later microorganisms, can accumulate. Hand-contact surfaces in particular are known to be often heavily contaminated and frequently touched prior to handling RTE foods (Clayton and Griffith, 2004; Griffith, 2013). Rinses, especially in CIP, in liquid-processing plants can be tested to provide an indirect estimate of surface cleanliness, as can the quality of the first product run after, for example, a weekend shut-down. One approach is to designate surfaces as food contact, general environmental, hand contact, and cleaning (equipment/cloths). The latter need care and attention and can act as vectors causing the zig-zag spread of pathogens within an environment (Harrison et al., 2003). Another approach uses a "criticality index" of six levels (Microgen, 2015), whereby the frequency of monitoring is assigned to an area depending on how critical it is; for example, a final product area would be subjected to greater testing than a raw material area, and areas subject to warmer, wetter conditions would constitute a higher risk than colder, drier areas. This type of approach would need careful thought if testing for Listeria or Listeria monocytogenes.
For high hygiene areas, particularly those manufacturing RTE foods and for which final product sampling for pathogens is frequent, a three-stage sampling program is proposed. Barriers to the high hygiene area (e.g., personnel entry areas, product decontamination entry tunnels, packaging entrances) are sampled for pathogens during production; if pathogens are found, the control of the barriers is checked. As an indication of whether pathogens are present during manufacturing periods, pathogen "collector points" (e.g., cleaning equipment, drains, footwear, vehicle wheels) are sampled during production. If these are negative, there is some confidence that the production area is free of pathogens; if positive, extended sampling can be undertaken to elucidate the pathogen source. Finally, cleaned equipment is sampled for pathogens to verify cleaning and disinfection performance.
A third option (ICMSF, 2002), a variation of the high-risk/low-risk approach developed in the United States, where final product testing for pathogens may be less frequent, is to arrange areas into zones or shells (Fig. 44.9). This essentially establishes successively "cleaner" zones and/or zones of increased sampling frequency and decreasing levels of contamination. Zone 1 represents the most critical areas for cleaning, mainly surfaces in contact with RTE products, for example conveyor belts and cutters; filling and depositing heads, spray dryers, or cream depositors can be particularly difficult to clean effectively. Zone 2 could include hand-contact areas in close proximity to Zone 1 and may even include the surfaces used or touched during hand-washing. Zone 2 would also include environmental areas in close proximity to Zone 1.
FIGURE 44.9 Organization of areas, based on risk, to determine sampling frequency and stringency.
The precise allocation of areas into zones will, to some extent, be product- and plant-specific, and the figure is indicative only. Microorganisms can easily be spread in food premises, and molecular subtyping has shown that pathogens can persist for years in the environment, even after so-called deep cleaning to eradicate them. These environmental areas may be good locations for the survival of organisms such as Listeria. Any Listeria control strategy should concentrate on eradicating Listeria from Zone 2 sites first, before consideration of Zone 1; failure to do so is only likely to lead to rapid recontamination of Zone 1 surfaces. Zone 3 includes floors, walls, etc. in areas more distant from Zone 1 and comprises the least critical food-handling areas, where sampling frequency may be at its lowest and environmental contamination at its highest, for example where raw products are received. This is relative, that is, in relation to Zone 1, and is not an excuse for poor cleaning or for not testing; it is a recognition, based on risk, that less stringent sampling is needed. However, all three zones need to be considered in terms of product flow and people movement.
Depending on where and how it is used, the degree of separation of high from low risk, and its potential to spread bacteria, cleaning equipment could be considered as Zone 2 or 3. The use of contaminated cleaning equipment is one of the main reasons for failure to clean effectively and can spread pathogens from low- to high-risk areas. Ideally each area and zone should have hygienically designed, color-coded equipment, which should not be used in other areas.
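The zoning concept above is essentially a configuration that maps surface groups to sampling stringency and to an escalation path when pathogens are found. The sketch below is one possible encoding of that idea; the zone descriptions, the relative sampling frequencies, and the escalation messages are assumptions made for illustration and would need to be set plant by plant.

```python
# A minimal, illustrative encoding of the zone/shell concept and the escalation
# logic implied by the text. Allocation of areas to zones is product- and
# plant-specific; these values are placeholders.
ZONES = {
    1: {"surfaces": "RTE product-contact surfaces (conveyors, cutters, filling heads)",
        "relative_sampling": "most frequent/stringent"},
    2: {"surfaces": "hand-contact and environmental surfaces close to Zone 1",
        "relative_sampling": "frequent"},
    3: {"surfaces": "more distant floors, walls, raw-material intake areas",
        "relative_sampling": "least frequent (but still cleaned and tested)"},
}

def next_action(zone: int, listeria_positive: bool) -> str:
    """Return an illustrative next step for a given zone and Listeria result."""
    if not listeria_positive:
        return "continue routine monitoring"
    if zone == 2:
        # Eradicate in Zone 2 first; otherwise Zone 1 is rapidly recontaminated.
        return "eradicate harbourage sites in Zone 2, then re-verify Zone 1"
    if zone == 1:
        return "investigate product risk, deep clean, and re-sample Zones 1 and 2"
    return "review traffic and barriers from Zone 3 toward Zones 2 and 1"

for zone, info in ZONES.items():
    print(f"Zone {zone}: {info['surfaces']} -> sampling {info['relative_sampling']}")
print(next_action(zone=2, listeria_positive=True))
```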
It is also essential to store equipment correctly, that is, clean and dry, or, if used on a semicontinuous basis, frequently cleaned and stored in fresh disinfectant solution monitored for concentration. Care should also be taken, especially in Listeria control programs, with shoes/boots, tracks, etc.; these need to be cleaned properly as they can spread organisms around premises (Tompkin et al., 1999).
This type of framework fits into the increasing use of cleaning and cross-contamination audits. Cleaning audits (internal or external) should be conducted independently and assess both the quality and adequacy of the cleaning program and the level of compliance with it. Personal digital assistants (PDAs), palm-held auditing tools with appropriate software, or smartphones are available to capture data electronically. They simplify the whole process and can incorporate data from microbiological or rapid testing. One advantage of capturing data electronically is that draft reports can be produced, if necessary, before the auditor leaves the areas or premises being audited. Data in electronic form are far more useful and usable than those in paper reports; other advantages include greater consistency, overall time savings, and greater usability of data for analyzing trends and designing corrective strategies. These audits become even more powerful if combined with cross-contamination audits, which are broader in scope and are used to assess the overall risk or potential for cross-contamination. The latter assess more than just cleaning and include personal hygiene, the facilities available (for example, hand-washing and drying), and traffic and personnel flow.
Monitoring surface cleanliness is not without costs, but these need to be considered in relation to the costs associated with failing to monitor (Fig. 44.10). Given the cost of cleaning and the expenditure in time, effort, and money that some companies put into surface sampling, it is surprising that more use is often not made of the results. It has been said that "if it wasn't recorded it did not happen," and documented results from microbiological and nonmicrobiological sampling can be used to validate cleaning and provide ongoing data for trend analysis (a requirement of standards such as the BRC). This can be a powerful management tool and can be used as part of a statistical process control approach. It can also identify areas where cleaning is regularly poorly performed, staff who are not cleaning appropriately, the effectiveness of changes in cleaning practices, and when cleaning is starting to go, or has gone, "out of control." Monitoring should therefore be based on regular, consistent testing coupled with trend analysis, and requires that all results are considered (not just those deemed unacceptable), as good results can also be informative. Results to look for include outliers (is this a test anomaly, or can some other event explain the difference?), consistent changes in results (has a new cleaning chemical or method been used?), higher results at certain intervals (for example, on a different shift), and trends showing a drift toward going out of control. The results of environmental surface testing can also be linked to food end-product counts, staff rotas, shift patterns, etc. Cumulatively this can help to maximize cleaning and ensure monitoring is as cost-effective as possible: maximum effectiveness for minimum cost. Cleaning costs money and needs to deliver; otherwise it is a waste of money and time.
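As one minimal illustration of the trend analysis and statistical process control idea, the sketch below scans a series of documented ATP readings for two of the patterns mentioned above: an isolated outlier relative to a recent baseline, and a sustained upward drift. The window size, the three-standard-deviation rule, the run length of six, and the example readings are illustrative assumptions rather than recommended limits.

```python
import statistics

def trend_flags(history, window=10):
    """Illustrative SPC-style checks on a series of ATP readings (RLU).

    Flags (1) a point beyond the mean + 3 SD of the preceding baseline window
    (possible out-of-control result) and (2) a run of six or more consecutive
    increases (drift toward going out of control). Thresholds are assumptions.
    """
    flags = []
    for i in range(window, len(history)):
        baseline = history[i - window:i]
        mean, sd = statistics.mean(baseline), statistics.stdev(baseline)
        if history[i] > mean + 3 * sd:
            flags.append((i, "outlier: investigate test anomaly or cleaning failure"))
    run = 0
    for i in range(1, len(history)):
        run = run + 1 if history[i] > history[i - 1] else 0
        if run >= 6:
            flags.append((i, "sustained upward drift: review chemicals, methods, shift"))
    return flags

# Hypothetical weekly readings from one hand-contact surface
readings = [110, 120, 105, 115, 112, 118, 109, 121, 116, 113,
            119, 124, 128, 131, 135, 138, 240]
for idx, message in trend_flags(readings):
    print(f"week {idx}: {message}")
```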
Monitoring cleaning efficacy should therefore be part of the work of any food business and, if undertaken appropriately, can be cost-effective, providing greater confidence in food safety and superior product shelf-life; in most food manufacturing it is likely to involve, increasingly, an integrated testing approach. In spite of the introduction of HACCP, testing remains an important function, with Czarneski et al. (2012) stating that establishing and maintaining a comprehensive environmental monitoring program is critical in the food industry. The market for rapid testing in the food industry is predicted to increase further, especially as these tests become faster, better, and cheaper (Weschler, 2011), with increasing use of rapid methods at the expense of more traditional ones. Factors influencing this include allergen concerns, the increased importance of cross-contamination, and the cost of recalls due to contaminated product, and this has coincided with the emergence of a number of pathogens with a low infectious dose. The need for surface testing as a means of assessing cleaning efficiency has therefore increased and will increase further, because of its proactive potential to ensure food does not become contaminated in the first place (as opposed to end-product testing, which tells you something may or may not have already happened).
Tests for surface cleanliness will evolve to meet changing requirements in how they are used and how they can be integrated into a holistic approach, considered in relation to developments in legislation, reference values, audit standards, and other needs. An always important factor is cost, especially as the extent of testing needs to be reviewed in relation to the benefits that can be achieved. Such analyses (Fig. 44.10) should consider the failure costs, that is, the costs associated with poor cleaning, as well as the costs of testing. Cleaning should deliver value, that is, clean surfaces in relation to cost and risk. For food service, which currently does little testing, the future is likely to bring very easy to use, low-cost, noninstrument tests. Changes in test methods are likely to be driven by versatility, speed, specificity, sensitivity, and cost, along with more sophisticated, foolproof software. More innovative approaches are likely in the design of flexible agar contact systems suitable for use on irregularly shaped surfaces. More rapid microbiological tests will be developed, in isolation or in combination with tests for the presence of specific pathogens. The ability to detect lower levels of ATP has developed over recent years with, depending upon the reagents and how they are produced, the ability to detect below 1 fmol of ATP. ATP or microbiological tests specific and sensitive enough to detect very low levels of bacteria or ATP, even in dry conditions, could be developed, and new formats other than swabs may be devised. Currently, no rapid test is well-suited to testing surfaces that are high in fats, and tests specific for fats and oils would be useful for some processors. Molecular techniques are likely to become even more specific, lower in cost, more rapid, and more widely used.
References
The effects of ozone and "Open Air Factor" against aerosolized Micrococcus luteus
Food quality check program. Microbiological recommendations and sampling schedule-2014
British Columbia Centre for Disease Control
Terminal decontamination of patient rooms using an automated mobile UV light unit
The British Retail Consortium
Evaluation of two surface sampling methods for detection of Erwinia herbicola on a variety of materials by culture and quantitative PCR
Seek & destroy: identifying and controlling Listeria monocytogenes growth niches. Food Safety Magazine
Environmental monitoring and decontamination of food processing facilities
The use of notational analysis to observe the implementation of specific food safety practices in catering
Cleaning and handling implements as potential reservoirs for bacteria contamination of some ready-to-eat foods in retail delicatessen environments
Bacterial adhesion at synthetic surfaces
Hygiene: how myths, monsters and mothers-in-law can promote behaviour change
ATP bioluminescence and the validation and monitoring of cleaning programmes
How to Clean: A Management Guide
Impact of processing method on recovery of bacteria from wipes used in biological surface sampling
Breaking new boundaries: simple rapid multiple test system
A simple bioluminogenic detection method for the rapid detection of bacteria in foods in 4-7 hours
Microbiological sampling of surfaces
Effectiveness of cleaning techniques used in the food industry in terms of the removal of bacterial biofilms
Influence of growth rates on susceptibility to antimicrobial agents: biofilms, cell cycle and dormancy
Food safety in catering establishments
What makes a good ATP hygiene monitoring system?
Advances in understanding the impact of personal hygiene and human behaviour on food safety
Developing and maintaining a positive food safety culture
Handling poultry and eggs in the kitchen
Towards a strategic cleaning assessment programme: hygiene monitoring and ATP luminometry, an option appraisal
An evaluation of hospital cleaning regimes and standards
Environmental surface cleanliness and the potential for contamination during handwashing
Bacterial transfer rates and cross-contamination potential associated with paper towel dispensing
The use of direct epifluorescent microscopy (DEM) and the direct epifluorescent filter technique (DEFT) to assess microbial populations on food contact surfaces
Salmonella enteritidis phage type 4 isolates more tolerant of heat, acid or hydrogen peroxide also survives longer on surfaces
Microorganisms in Food 7: Microbiological Testing in Food Safety Management
Achieving continuous improvement in reductions in foodborne listeriosis: a risk-based approach
Methods for recovering microorganisms from solid surfaces used in the food industry: a review of the literature
Molecular methods for the assessment of bacterial viability
Microbiological sampling in the dry foods processing environment
Effectiveness of the methods of dish and utensil washing in public eating and drinking establishments
Performance evaluation of various ATP detecting units. Silliker Food Science Center
A modified ATP benchmark for evaluating the cleaning of some hospital environmental surfaces
Comparative performance of contact plate, electrostatic wipes, swabs and novel sampling device for the detection of Staphylococcus aureus on environmental surfaces
Seek and destroy process: Listeria monocytogenes process controls in the ready-to-eat meat and poultry industry
A Guide to Environmental Microbiological Testing for the Food Industry
Factors influencing the recovery of microorganisms from surfaces using traditional hygiene swabbing
A comparison of surface sampling methods for detecting coliforms on food contact surfaces
A comparison of traditional and recently developed methods for monitoring surface hygiene: an industry trial
Problems associated with traditional hygiene swabbing: the need for in-house standardization
Bactericidal properties of ozone
A comparison of traditional and recently developed methods for monitoring surface hygiene within the food industry: a laboratory study
Challenges in microbial sampling in the indoor environment: workshop report summary
Environmental Swabbing. A Guide to Method Selection and Consistent Technique
Contamination of bottles used for feeding reconstituted powdered infant formula and implications for public health
Microbiological and observational analysis of cross-contamination risks during domestic food preparation
National validation study of a cellulose sponge wipe-processing method for use after sampling Bacillus anthracis spores from surfaces
Recovery of Streptococcus hemolyticus from restaurant tableware
A study of cleaning standards and practices in food premises in the United Kingdom
Validation of the microbiological methods Hygicult dipslide, contact plate, and swabbing in surface hygiene control: a Nordic collaborative study
Validation of the Hygicult E dipslides method in surface hygiene control: a Nordic collaborative study
Control of Listeria monocytogenes in the food processing environment
Guidelines to prevent post-processing contamination from Listeria monocytogenes
Biological threat detection in the air and on the surface: how to define the risk
Examining food, water and environmental samples from healthcare environments
An assessment of cleaning regimes and standards in Butchers' shops