title: Predicting bovine respiratory disease outcome in feedlot cattle using latent class analysis authors: Blakebrough-Hall, Claudia; Hick, Paul; González, Luciano A date: 2020-11-28 journal: J Anim Sci DOI: 10.1093/jas/skaa381 Bovine respiratory disease (BRD) is the most significant disease affecting feedlot cattle. Indicators of BRD often used in feedlots, such as visual signs, rectal temperature, computer-assisted lung auscultation (CALA) score, the number of BRD treatments, presence of viral pathogens, viral seroconversion, and lung damage at slaughter, vary in their ability to predict an animal's BRD outcome, and no studies have been published determining how a combination of these BRD indicators may define BRD outcome groups. The objectives of the current study were (1) to identify BRD outcome groups using BRD indicators collected during the feeding phase and at slaughter through latent class analysis (LCA) and (2) to determine the importance of these BRD indicators to predict disease outcome. Animals with BRD (n = 127) were identified by visual signs and removed from production pens for further examination. Control animals displaying no visual signs of BRD (n = 143) were also removed and examined. Blood, nasal swab samples, and clinical measurements were collected. Lung and pleural lesions indicative of BRD were scored at slaughter. LCA was applied to identify possible outcome groups. Three latent classes were identified in the best model fit, categorized as non-BRD, mild BRD, and severe BRD. Animals in the mild BRD group had a higher probability of having visual signs of BRD compared with non-BRD and severe BRD animals. Animals in the severe BRD group were more likely to require more than 1 treatment for BRD and to have ≥40 °C rectal temperature, ≥10% total lung consolidation, and severe pleural lesions at slaughter. Animals in the severe BRD group were also more likely to be naïve at feedlot entry and the first BRD pull for Bovine Viral Diarrhoea Virus, Bovine Parainfluenza 3 Virus, and Bovine Adenovirus, and to have a positive nasal swab result for Bovine Herpesvirus Type 1 and Bovine Coronavirus. Animals with severe BRD had 0.9 and 0.6 kg/d lower overall average daily gain (ADG) compared with non-BRD animals and mild BRD animals (P < 0.001). These results demonstrate that there are important indicators of BRD severity. Using this information to predict an animal's BRD outcome would greatly enhance treatment efficacy and aid in better management of animals at risk of suffering from severe BRD. Indicators of BRD commonly used in feedlots include observing visual signs, measuring rectal temperature, computer-assisted lung auscultation (CALA), tests for pathogen shedding, and evaluating lung damage at slaughter. Prediction of BRD risk is often based on techniques to measure exposure to viruses involved in the BRD complex, as well as on cohort treatment data such as the number of BRD treatments an animal received. These measures, or indicators of disease, collected during the feeding phase and at slaughter have been used to define BRD in various forms, usually driven by subjective classification of animals as either sick or healthy based on a predefined cut-off point (White and Renter, 2009; Buczinski et al., 2015).
To date, a combination of these indicators has not been used to differentiate BRD outcome groups defined using unsupervised classification techniques such as latent class analysis (LCA). Additionally, most cohort data collected by feedlots have been used retrospectively to analyze trends in BRD risk (Babcock et al., 2013; Avra et al., 2017), rather than using detailed individual animal data to predict an animal's BRD outcome during the feeding phase. Use of detailed information collected daily to determine the impact of BRD on individuals will aid in accurate identification of animals that are at greater risk of severe BRD and could aid in more effective management of the disease. The objectives of the current study were (1) to identify BRD outcome groups using BRD indicators collected during the feeding phase and at slaughter through LCA and (2) to determine the importance of these BRD indicators to predict disease outcome. The study had approval from the Animal Ethics Committee of the Research Integrity and Ethics Administration, The University of Sydney (Approval # 1118). All methods were carried out in accordance with the relevant guidelines and regulations. The study was conducted at a commercial feedlot in southern New South Wales, Australia, using detailed sampling from BRD cases and control animals. Four pens of mixed-breed Bos taurus castrated male cattle (n = 898) were inducted into the feedlot for intensive surveillance of BRD in late summer and early autumn of 2017. Animals were sourced from multiple locations, either purchased from saleyards (n = 788) or received by direct consignment from cattle backgrounding properties (n = 110). Cattle entered the feedlot at 12 to 24 months of age based on dentition assessed at feedlot entry, although exact age was unknown. Processing of animals at feedlot entry was staggered over a 4-wk period based on cattle availability. At feedlot entry, animals had initial body weight (BW) recorded (mean ± SD induction weight: 432 ± 51.2 kg) and were administered treatments which included a hormonal growth promotant implant (Revalor S; Coopers Animal Health, Macquarie Park, NSW, Australia), vaccination against respiratory disease caused by Mannheimia haemolytica (Bovilis MH, Coopers Animal Health), a modified live intranasal vaccine for Infectious Bovine Rhinotracheitis (Rhinogard, Zoetis Animal Health, NJ), a 5-in-1 vaccination for clostridial diseases (Tasvax 5 in 1, Coopers Animal Health), and an antiparasitic injection (ivermectin 200 µg/kg; Bomectin, Bayer, Leverkusen, Germany). Blood samples were obtained from the tail vein of all animals at feedlot entry for serology to 5 viruses associated with BRD (Bovine Herpesvirus 1, BHV1; Bovine Viral Diarrhoea Virus, BVDV; Bovine Respiratory Syncytial Virus, BRSV; Bovine Parainfluenza Virus 3, BPI3; and Bovine Adenovirus 3, BAdV3). For these samples, one 10-mL EDTA BD Vacutainer tube (BD Vacutainer, Becton, Dickinson and Company, North Ryde, NSW, Australia) per animal was centrifuged (2,500 × g, 20 min) within 30 min of collection. Plasma was transferred to separate storage vessels and stored at −20 °C until analysis. Following feedlot induction, animals were allocated to 4 production pens for an average of 114 d on feed, with 1 pen designated for each week's intake. Animals were fed to allow for ad libitum feed consumption and were transitioned through 3 starter rations to a steam-flaked barley-based finisher diet over an 18-d period.
Detail on ration formulation for the finisher diet has been described previously (Blakebrough-Hall et al., 2020a). Animals were checked daily by trained feedlot staff for visual signs of BRD, starting from day 1 of the study (the day after the first pen of animals entered the feedlot) until 270 BRD and control animals had been sampled between 2 and 42 d on feed. Animals were scored for visual signs of BRD in the pen by staff using a modified version of the Wisconsin calf scoring chart (Blakebrough-Hall et al., 2020a). The scoring system included assessment of 7 visual signs: lethargy (slow to move in response to stimulus), head carriage, labored breathing, cough, nasal discharge, ocular discharge, and rumen fill, with each sign assigned a score ranging from 0 to 3, with 3 being the most severe. Animals with visual signs of BRD (n = 127; score > 0 for at least 1 of the visual signs specific to BRD: nasal or ocular discharge, labored breathing, or cough) were removed from their pens on the day of observed visual signs and taken for blood sampling and clinical data collection using methods described previously (Blakebrough-Hall et al., 2020a). For each animal pulled based on visual signs of BRD, a visually healthy control animal exhibiting no visual signs of BRD was removed from the same pen on the same day (n = 143; score 0 for all of the 7 visual signs). Data recorded at the first BRD pull included date, visual identification number, electronic identification number, pen, live weight, rectal temperature, and CALA score. Following initial visual appraisal, animals were treated for BRD based on their rectal temperature and CALA score. Nasal swabs were obtained from both visually sick cases and visually healthy controls at the first BRD pull for quantitative PCR to test for the BRD-associated viruses BHV1, BVDV, BRSV, BPI3, and Bovine Coronavirus (BCoV). These samples were stored dry, with no media, at 4 °C in their collection vessels for up to 1 month until analysis. Blood samples were obtained from the tail vein of all animals at the first BRD pull for serology for antibodies to the same BRD-associated viruses measured at induction (BHV1, BVDV, BRSV, BPI3, and BAdV3). Paired sera were identified from individual animals at the time of induction to test concurrently with blood samples obtained at the first BRD pull. Necropsies of any BRD mortalities were performed by trained feedlot personnel, with date and reason of death recorded. These animals were removed from the analysis (n = 16) as they did not have lung and pleural lesion data. All testing of serum and nasal samples was performed at the Centre for Animal Science, Queensland Alliance for Agriculture and Food Innovation, University of Queensland. Serum samples from feedlot entry and the first BRD pull were tested using an indirect multiplex enzyme-linked immunosorbent assay (ELISA; BIOX K 284 ELISA, Bio-X Diagnostics, Rochefort, Belgium). The assay was carried out according to the protocol described by the manufacturer with the modifications described in a previously published study (Hay et al., 2016). Briefly, test sera were diluted 1:100 using a buffer and incubated on the plate for 1 hr at 21 °C. The plate was washed, a conjugate in the form of a peroxidase-labeled anti-bovine IgG1 monoclonal antibody was added to the wells, and the plate was re-incubated at 21 °C for 1 hr. Following the second incubation, the preparation was washed and the chromogenic substrate added.
After 10 min, the reaction was stopped and the optical densities at 450 nm read using a conventional ELISA plate reader. The test plate was considered valid only if the positive serum yielded a difference in optical density at 10 min that was greater for each valence than 1.000 for BHV1, 1.100 for BVDV, 1.100 for BRSV, and 1.000 for BPI3, and the negative serum yielded a difference in optical density of <0.300. The raw optical density results for each test plate were exported to a Microsoft Excel spreadsheet and optical densities were adjusted relative to the control samples using the formulae specified in the test kit algorithm. Each serological result was categorized as 0 ("seronegative", the category with the lowest optical densities), 1, 2, 3, 4, or 5 (where category 5 consisted of the highest optical densities; Hay et al., 2016). If an animal was seropositive (serological result ≥ 1) at either induction or the time of the first BRD pull, it was considered to be pre-exposed or immune for that particular virus. If an animal was seronegative (serological result of 0) at both induction and the time of the first BRD pull, it was considered to be naïve for that virus. The QuantiTect Multiplex RT-PCR kit (Qiagen, Germantown, MD) was used for real-time, multiplex, 1-step quantitative reverse transcription polymerase chain reaction (RT-qPCR) analysis of total nucleic acids extracted from nasal swab samples. The assay was conducted using previously described methodology (Horwood and Mahony, 2011). The real-time PCR primers and probes were designed using Primer Express software (Applied Biosystems, Foster City, CA). Primers were designed with a Tm of 60 °C and probes with a Tm of 70 °C. Primers and probes were designed within a narrow annealing temperature range to facilitate optimization of the multiplex reaction (Horwood and Mahony, 2011). The predicted amplicon size was limited to <150 bp for each primer pair. Primers and probes were designed in the most conserved region of the viral genomes. The specific viral genome regions used for each virus are described in further detail in a previous study (Horwood and Mahony, 2011). Total nucleic acids were extracted from nasal swabs collected from cattle at the first BRD pull using the DNeasy Blood and Tissue Kit (Qiagen, Germantown, MD) according to the manufacturer's instructions, except for the omission of the RNase treatment, and extracts were stored at −80 °C until analysis. The viral species tested for in this study included isolates of BHV1, BVDV, and BPI3. Two additional BRD viruses, BRSV and BCoV, were also tested according to an in-house method which has not yet been published (Mahony, personal communication). The 20 μL reaction mix for the assay contained 2 μL of the nucleic acid sample, 200 nM of each primer, 200 nM of each probe, 1 × QuantiTect Multiplex RT-PCR Master Mix (Qiagen), and sterile deionized water (Horwood and Mahony, 2011). Reactions were conducted with a Corbett Rotor-Gene 3000 with cycling parameters set at 50 °C for 20 min and 95 °C for 15 min, followed by 40 cycles of 94 °C for 45 s and 60 °C for 75 s. Results were analyzed using Corbett Rotor-Gene 3000 software. Detection limits are described previously (Horwood and Mahony, 2011). Briefly, TCID50 values were determined for representative cell culture isolates of BHV1 (BHV37), BVDV (MD75), and BPI3 (BPI3JCU). Clarified supernatant from the titrated viral suspensions was combined into a single suspension with a final concentration for all of the viruses of 1 × 10^6 TCID50/mL (Horwood and Mahony, 2011).
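As a minimal sketch of how the serostatus rule described above could be applied in SAS (the package used for the study's analyses), the data step below collapses paired ELISA category scores into the dichotomous pre-exposed/naïve status; the dataset and variable names (serology, bhv1_induction, bhv1_pull) are hypothetical, and the same logic would be repeated for each of the 5 viruses.

```sas
/* Minimal sketch, assuming a hypothetical dataset SEROLOGY with one row per
   animal and ELISA category scores (0 to 5) at induction and first BRD pull */
data serostatus;
   set serology;
   length bhv1_status $11;
   /* Naive: serological result of 0 at BOTH induction and the first BRD pull */
   if bhv1_induction = 0 and bhv1_pull = 0 then bhv1_status = 'naive';
   /* Pre-exposed: serological result of 1 or more at EITHER time point */
   else bhv1_status = 'pre-exposed';
   /* Numeric 1/2 coding of the same indicator, matching the 1-based item
      coding that PROC LCA expects for dichotomous items */
   bhv1_item = 1 + (bhv1_status = 'naive');
run;
```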
Where the RT-qPCR reaction yielded a positive result for any of the viruses of interest, the threshold cycle (Ct) values were converted to a categorical score of 0 to 5, with 0 being a negative result and 5 being a large amount of virus present. All animals were sent to a commercial abattoir located ~100 km from the feedlot and slaughtered on the day of arrival or the following morning. All lungs were scored for evidence of pathology by 2 personnel trained by an experienced veterinarian. Lungs were visually and physically examined for degree of consolidation and pleurisy. Lung consolidation was recorded using a previously described scoring method (Theurer et al., 2013), where the degree of consolidation (lung tissue filled with liquid instead of air) in each lobe was estimated and summed to form a total percentage of lung consolidation. Pleurisy was recorded using a scoring system of 0 to 3 described previously (Blakebrough-Hall et al., 2020a). The term pleuritic tags refers to the adhesion of the lung to the rib cage by fibrous tags, where a score of 3 indicates complete adhesion of the lungs to the thoracic cavity. No lung consolidation score was recorded for animals with a pleurisy score of 3 as there was no lung present on the offal table for scoring. These animals were therefore absent from any analysis of percentage of lung consolidation. Grading occurred on all carcasses ~24 hr after slaughter using the Meat Standards Australia (MSA) grading system by an accredited inspector (Polkinghorne et al., 2008). Statistical analyses were performed using SAS software (version 9.4; SAS Institute Inc., Cary, NC). LCA was used to determine the number of latent classes by grouping animals with similar outcomes based on 16 indicators of BRD. All 16 BRD indicators were transformed to a dichotomous outcome (Table 1). Cut-off points used to determine the response category for rectal temperature and CALA score were based on cut-off points used in previous studies and as commonly used in the industry (Schaefer et al., 2012; Mang et al., 2015; Nickell et al., 2018). Cut-off points of 10% lung consolidation, pleural lesion score ≤2, and number of BRD treatments ≤2 were determined based on results from a previously published companion study (Blakebrough-Hall et al., 2020a). The LCA was used to determine the number of underlying categorical latent classes with mutually exclusive levels of the variables. Models with 2 to 5 latent classes were obtained, and the best model was selected based on fit statistics, Akaike's information criterion (AIC), Bayesian information criterion (BIC), and the likelihood-ratio G² statistic (Lanza et al., 2007), as well as the entropy. Lower AIC, BIC, G², and entropy values reflect a better model. Model interpretability was also considered when assessing the optimal model, to ensure that each latent class was distinguishable from the others on the basis of item-response probabilities, that no latent class had a near-zero probability of class membership, and that a meaningful label could be assigned to each class (Lanza et al., 2007). Two parameters were estimated: the number of animals belonging to each latent class (the a priori probability that a selected animal was in each class) and the conditional class membership probabilities, which define the distribution of the responses to each item within each class. Each animal was allocated to a single latent class with the highest a posteriori probability of membership.
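As an illustration of this model-fitting step, the sketch below uses PROC LCA, the SAS add-on procedure cited above (Lanza et al., 2007). The variable names are hypothetical and only a subset of the 16 dichotomized indicators is shown; in practice, NCLASS would be varied from 2 to 5 and the AIC, BIC, G², and entropy compared across fits.

```sas
/* Minimal sketch, assuming a dataset BRD with dichotomous items coded 1/2 */
proc lca data=brd;
   nclass 3;                          /* candidate number of latent classes  */
   items visual_signs temp40 cala2    /* ...plus the remaining indicators,   */
         consol10 pleural3 trt_gt1;   /* one per dichotomized BRD measure    */
   categories 2 2 2 2 2 2;            /* one entry per item, all dichotomous */
   seed 201701;                       /* fixed seed for reproducible starts  */
run;
```

The covariate analysis described next maps onto the procedure's COVARIATES statement, which fits the logistic regression of class membership on predictors such as ADG.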
The effect of the covariates induction weight, exit weight, pen, days on feed at the first BRD pull, ADG to the first BRD pull, and overall ADG (ADG over the length of the feeding phase) on the probability of class membership in the LCA was assessed using logistic regression. Only statistically significant (P < 0.05) covariates remained in the model. Dichotomized response categories (Table 1) included:
CALA, score 1 to 3: score < 2 (n = 101) vs. score ≥ 2 (n = 169)
Lung consolidation, %: <10% total lung consolidated (n = 203) vs. ≥10% total lung consolidated (n = 22)
Pleural lesions, score 1 to 3: score ≤ 2 (n = 221) vs. score 3 (n = 49)
BHV1 serostatus: antibody positive at either induction or the first BRD pull (pre-exposed; n = 110) vs. seronegative at both induction and the first BRD pull (naïve; n = 134)
BVDV serostatus: pre-exposed (n = 168) vs. naïve (n = 77)
BRSV serostatus: pre-exposed (n = 240) vs. naïve (n = 6)
BPI3 serostatus: pre-exposed (n = 229) vs. naïve (n = 17)
BAdV3 serostatus: pre-exposed (n = 223) vs. naïve (n = 23)
BHV1 RT-qPCR swab result at the first BRD pull
Mixed-effects linear regression models with the MIXED procedure in SAS (SAS Inst. Inc., Cary, NC) were used to estimate differences between the latent classes in animal performance outcomes. Latent class and breed were included as fixed effects. Induction weight was included as a covariate for ADG to the first BRD pull and carcass weight. Pen was included as a random effect. Where breed was found to be nonsignificant (P > 0.05), it was removed from the models. Significance was declared at P ≤ 0.05, and means were separated using the Bonferroni adjustment for multiple comparisons. The performance and clinical characteristics of the cohort are presented in Table 2. The average days on feed at which an animal was first pulled for BRD was 21, and animals had an ADG to the first BRD pull of 1.2 kg/d. The majority of the 270 animals (84.4%) received either 0 or 1 treatment for BRD. Of the 270 animals, 77 received tulathromycin and 101 received tilmicosin at the first pull, with the remainder receiving no antimicrobial treatment due to having a low rectal temperature or CALA score. A large proportion of animals identified visually for BRD had a rectal temperature ≥40 °C (68.2%) and a high CALA score (62.6%). Only 9.8% of all animals had ≥10% total lung consolidation at slaughter, whereas 18.2% had severe pleural lesions with lung tissue adhesion to the rib cage. Approximately half of the animals showed pre-exposure to BHV1, with the remaining 53.0% of animals seronegative at both induction and the first BRD pull despite vaccination upon arrival. Approximately two-thirds of animals were pre-exposed to BVDV, and the vast majority (~90%) had been pre-exposed to the other 3 respiratory viruses. BHV1 was the virus most frequently detected by PCR (9.3%) at the first BRD pull, and no more than 3% of the animals tested positive for any of the other 4 viruses. A model with 3 latent classes was the optimal baseline model based on the AIC (lowest in the 3- and 4-class models) and entropy values (lowest in the 3-class model; Table 3).
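A minimal sketch of the mixed model described above for a single outcome is given below, assuming a hypothetical dataset PERF with one row per animal; the variable names are illustrative only.

```sas
proc mixed data=perf;
   class latent_class breed pen;
   /* Latent class and breed as fixed effects; induction weight entered as a
      covariate for ADG to the first BRD pull and for carcass weight */
   model carcass_wt = latent_class breed induction_wt;
   random pen;                               /* pen as a random effect */
   lsmeans latent_class / pdiff adjust=bon;  /* Bonferroni-adjusted comparisons */
run;
```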
Additionally, the classes were easily distinguishable from each other on the basis of the response probabilities, no latent class had a near-zero probability of class membership, and a meaningful label could be assigned to each class (Table 4). Just over half (52%) of animals were assigned to latent class 1, "non-BRD", which had a low likelihood of the presence of any of the indicators characteristic of BRD such as visual signs, high rectal temperature, lung consolidation, or pleural lesions (Table 4). The non-BRD class also showed a lower likelihood of being seronegative to BHV1 compared with the other 2 classes, despite the fact that the seroprevalence of antibodies to BHV1 was 58% (Table 4). Animals in latent classes 2 and 3 corresponded to mild and severe BRD, respectively. Animals in the mild BRD class accounted for 40% of the cohort and had a greater likelihood of having visual signs of BRD, an elevated CALA score, and being seronegative to BHV1 compared with the non-BRD animals. Animals in this latent class were also less likely to be treated more than once for BRD, have a high rectal temperature, have lung consolidation ≥10%, and have a score of 3 for pleural lesions compared with class 3 (severe BRD). These animals also had a lower probability of being seronegative to BVDV, BPI3, and BAdV3 compared with those classified with severe BRD. Animals classified with mild BRD had a higher likelihood of nasal swabs testing positive for BHV1 or BCoV compared with those classified as severe BRD (Table 4). The cohort included 8% of animals classified with severe BRD, which was characterized by a high likelihood of being treated for BRD more than once, having a rectal temperature ≥40 °C, lung consolidation ≥10%, a score of 3 for pleural lesions, and being seronegative for BHV1, BVDV, BPI3, and BAdV3. These animals were also more likely to have a positive nasal swab result at the first BRD pull for BRSV, BPI3, and BCoV compared with animals in the mild BRD class. In the latent class model, the covariates induction weight, exit weight, and days on feed at the first BRD pull did not significantly affect latent class membership (P > 0.05); however, ADG to the first BRD pull and overall ADG were strong predictors of the latent classes for BRD (P < 0.001; Table 5). The odds of an animal belonging to the mild or severe BRD group were 1.27 and 1.37 times greater, respectively, for every 1 kg/d reduction in ADG to the first BRD pull compared with non-BRD animals. The odds of an animal belonging to the mild or severe BRD group were 1.65 and 7.14 times greater, respectively, for every 1 kg/d reduction in overall ADG compared with the non-BRD animals. Animals classified with mild BRD had reduced production performance compared with non-BRD animals (P < 0.001; Table 6); however, production was not as compromised in the mild animals as in the severe animals. Severe BRD animals had significantly reduced performance compared with mild BRD animals for all variables except induction weight (P > 0.05). Animals in the severe BRD class gained 0.6 kg/d less than those in the mild BRD class and 0.9 kg/d less than animals in the non-BRD class (P < 0.001). Exit weight and carcass weight were 71.5 and 39.0 kg lower in severe BRD animals compared with mild BRD animals, and 129.9 and 71.1 kg lower compared with non-BRD animals (P < 0.001). MSA marble score was 46.5 lower in animals with severe BRD compared with mild BRD (P < 0.001).
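To make the scale of these odds ratios explicit (a reconstruction from the reported values, not coefficients reported by the study), under the usual logistic parameterization an odds ratio of 7.14 per 1 kg/d reduction in overall ADG corresponds to a slope of

$$\frac{\mathrm{odds}(\mathrm{ADG}-1)}{\mathrm{odds}(\mathrm{ADG})} = e^{-\beta} = 7.14 \;\Rightarrow\; \beta = -\ln 7.14 \approx -1.97,$$

so, for example, a 0.5 kg/d shortfall in overall ADG would multiply the odds of severe (vs. non-BRD) class membership by roughly $e^{1.97 \times 0.5} \approx 2.7$.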
The current study aimed to differentiate categories of BRD severity based on 16 indicators of disease using LCA, and then to determine which of these BRD indicators were most important in assigning animals to a BRD group. It is important to point out that while the 16 mortalities were removed from the final dataset due to missing lung data, these animals were included in the initial analysis; however, including these animals did not affect the number of latent classes, and therefore the decision was made to remove them from the final dataset. The latent class model showed the best fit with 3 latent classes, which differentiated mild BRD and severe BRD, as well as animals that were not impacted by BRD. In this cohort, animals with mild BRD were characterized by a high likelihood of visual signs of BRD identified by trained pen riders, but these animals did not sustain permanent lung damage at slaughter, indicating either a less-severe infection or a potentially successful immune response to the disease. In comparison, severe BRD was characterized by a high likelihood of requiring more than 1 treatment for BRD and reduced weight gain, with these animals sustaining permanent lung damage at slaughter. Interestingly, rectal temperature and CALA, which are commonly used confirmation measures in feedlots, showed much lower importance in defining BRD outcome group than visual signs, the number of BRD treatments an animal received, and lung consolidation and pleural lesions at slaughter. Animals with mild BRD exhibited visual signs of BRD but were much less likely to require more than 1 treatment for BRD or to show evidence of lung damage at slaughter. These results suggest that those mild BRD animals responded to an initial BRD treatment, which limited disease progression. These results highlight the importance of early recognition and treatment of BRD to increase recovery rates and productivity and to reduce the economic costs that are associated with increasing disease severity (Blakebrough-Hall et al., 2020b). Visual signs had the highest influence on class membership of all BRD indicators. This is interesting considering many studies report the inaccuracy of visual signs to detect BRD when compared with lung damage detected at slaughter (White and Renter, 2009). Results from the present study may indicate that the lack of agreement between visual signs and lung damage at slaughter is due to the fact that animals recover following early recognition and treatment, and therefore do not sustain permanent lung damage, although the potential that these animals are false positives cannot be ruled out. While these transient BRD infections may still have impacts on production, they do not necessarily cause long-term pulmonary pathology recognizable at slaughter. Therefore, this could suggest that pen riders who can accurately identify signs of BRD early are important in limiting the impacts of severe BRD infections in feedlots. It is important to point out that more research is needed to determine whether the mild animals were in fact diagnosed earlier in the disease process than the severe animals, thereby limiting disease progression. The fact that the severe animals had lower ADG compared with mild animals is an indication that this may be the case.
The predictive performance of visual signs in the current study is also likely due in part to the fact that the surveillance animals were part of a sampling trial, and therefore observation of visual signs in these animals may have been more thorough than in a normal, non-study situation. Additionally, it is worth noting that the pen riders used in the present study were experienced at identifying BRD animals, all with more than 1 year's experience pen riding. Requiring more than 1 BRD treatment and severe lung damage at slaughter were important indicators that differentiated the severe animals from the mild animals. However, assessment of lung damage at slaughter is only useful as a retrospective indicator of BRD outcome, which limits its use as a predictive tool for BRD antemortem. Animals with high rectal temperatures were also more likely to belong to the severe BRD class, although this difference was not as pronounced as for some of the other indicators. Rectal temperature is one of the most widely used BRD confirmation tools in feedlots triggering treatment protocols, and these results demonstrate that animals with rectal temperature ≥40 °C at the first BRD pull are at a greater risk of not responding to an initial BRD treatment. In agreement with these results, increased rectal temperature was found to be predictive of increased risk of retreatment and mortality due to BRD in a previous study (DeDonder et al., 2010). Measurement of rectal temperature following initial visual detection appears to show some merit as a tool to indicate disease severity; however, using rectal temperature alone could still result in misclassifying 21.2% of animals in the mild group, and it is therefore not overly accurate as a standalone indicator of disease severity. Of interest was that a higher CALA score at the first BRD pull was more likely in animals with mild BRD compared with those with severe BRD. The reasons for this are unknown and appear to be inconsistent with the findings of a previous study, which found that the probability of requiring re-treatment for BRD was 13% lower in animals with normal CALA scores compared with those with more severe scores, and that animals with higher CALA scores had a 63% probability of retreatment (DeDonder et al., 2010). Additionally, more severe CALA scores were also associated with increased risk of death due to BRD (DeDonder et al., 2010). It is worth noting, however, that the lung auscultation scoring system these authors used differed from that of the present study, with scores ranging from 1 (normal) to 10 (diffuse, severe adventitious lung sounds) which were subjectively scored rather than using the Whisper computer program. The difference in the CALA severity scores could therefore be the reason for the different findings between studies. A potential limitation of the present study was the use of 2 different treatments for BRD based on the severity of the clinical signs (rectal temperature and CALA score). It could be argued that using 2 different treatments for BRD may confound the results due to the success of those treatments. These clinical signs were included in the latent class model to test for differences in severity among classes. Tulathromycin was administered to more severe BRD animals, which may have improved the recovery of those animals compared with tilmicosin. Under this scenario, animals with severe signs may have been less likely to be pulled for a second or third time.
Conversely, if tulathromycin was less effective compared with tilmicosin, then those animals may have been more likely to be pulled for a second or third time for treatment. The number of treatments an animal received was included as a measure of treatment success in the latent class model and was not necessarily confounded with clinical signs or treatment type. This factor seemed to have allowed the distinction between latent classes. Seronegativity indicating naivety to BHV1 and BVDV increased an animal's likelihood of developing BRD. The relationship between initial BRSV titers and BRD risk has been inconsistent (Fulton et al., 2002; Hay et al., 2016), but naivety at feedlot entry for BHV1, BVDV, and BPI3 appears to increase the risk of developing BRD (O'Connor et al., 2001; Hay et al., 2016). Additionally, it has been found that animals that are naïve or seronegative for more than 1 virus at feedlot entry are at progressively greater risk of developing BRD (Hay et al., 2016). This was the case in the current study, where the more severe BRD cases were more likely to be naïve to multiple viruses. These results support the use of adequate backgrounding and vaccination programs prior to feedlot entry for protection against more severe infections. Having said this, the fact that 53.0% of animals were still BHV1 naïve despite being vaccinated at feedlot entry may call into question the effectiveness of a single vaccination at feedlot entry. In a previous study, 39.1% of animals vaccinated with Rhinogard for BHV1 at induction did not subsequently seroconvert for BHV1 (Hay et al., 2016). The same study also found that even after vaccination with Rhinogard, initially seronegative animals were at increased risk of developing BRD in the feedlot. This could be because immunologically stressed animals may be unable to mount an effective immune response early enough following on-arrival vaccination. A study comparing on-arrival vaccination for BHV1 with delayed vaccination (14 d post-arrival) found that delayed vaccination improved the acquired immune response (Richeson et al., 2008). Alternatively, intranasal vaccination may not be sufficient to trigger an antibody response detectable by the ELISA. These findings may support the use of delayed vaccination following feedlot arrival or multiple vaccination programs during backgrounding and prior to feedlot entry. The presence of viral pathogens in nasal swab samples taken at the first BRD pull was generally not a good indicator of disease severity; however, this was dependent on the virus. This may have been due to the small proportion of animals with a positive nasal swab result for any BRD virus at the time of the first BRD pull (9.3% for BHV1, 3.0% for BVDV, 3.0% for BRSV, 0.4% for BPI3, and 2.6% for BCoV). This was a limitation of the present study because only 1 sample was collected from animals at the time when visual signs indicated BRD infection. Most viral infections are thought to resolve at around 14 d, despite a peak in visual signs of BRD occurring at around 21 d postinfection (Cusack et al., 2003). Therefore, animals pulled with visual signs may have already resolved the viral infection by this time and consequently may not have returned a positive nasal swab result. It is also generally accepted that the overt visual signs of BRD are more often associated with secondary bacterial and inflammatory responses than with the direct viral infection (Griffin et al., 2010; Baruch et al., 2019).
Bacterial pathogens were not measured in the current study, as the focus was on the initiating pathogens, so this relationship could not be explored. Interestingly, animals with mild BRD had a greater likelihood of being positive for BHV1 compared with severe BRD animals. A possible explanation for this is that animals in the mild BRD class were pulled earlier in the disease progression timeline, and therefore had a higher likelihood of shedding BHV1, but subsequently recovered following treatment. Additionally, there is a possibility that animals returned a positive swab result for BHV1 following the modified live virus vaccination at feedlot entry (van Drunen Littel-van den Hurk et al., 2001; Kleiboeker et al., 2003). Animals with severe BRD were more likely to be positive for BRSV and BCoV at the first BRD pull. This appears to be contrary to previous observations that detection of infection with shedding of specific pathogens does not equate to clinical disease and lung infection requiring treatment (Fulton and Confer, 2012). These results demonstrate the importance of novel viruses such as BCoV in the etiology of BRD in feedlot cattle and the need for continuous surveillance of pathogens, including new microorganisms that may be involved in the pathogenesis of BRD (Hick et al., 2012). The occurrence of BRD and its severity had a large influence on ADG to the first BRD pull and overall ADG in the latent class model. Additionally, production performance outcomes such as ADG to the first BRD pull, overall ADG, exit weight, carcass weight, and MSA marbling decreased as disease severity increased. The negative economic outcomes associated with decreased production performance have been demonstrated previously (Schneider et al., 2009; Blakebrough-Hall et al., 2020b). However, the strong linear influence of animal performance on BRD class membership using LCA is novel. Animals suffering from severe, sustained infection were more likely to have reduced weight gain impacting carcass weight at slaughter, as well as reduced carcass quality traits such as marbling, compared with those animals that never suffered from BRD or suffered from a milder infection. This confirms the need both to focus on reducing overall disease incidence affecting production and profitability and to regularly monitor production parameters such as ADG throughout the feeding phase. The present study confirms that visual signs are an important indicator to identify animals impacted by BRD, provided the pen riders are sufficiently trained to identify these signs early. Therefore, emphasis should be placed on training of new pen riders for accurate visual identification, as well as on efforts to retain experienced pen riders, which seems to be an industry-wide issue. Additionally, early initial treatment for BRD can potentially reduce the progression of infection and limit severity. In contrast, animals that require more than 1 treatment for BRD appear to be cases of more severe infection resulting in permanent lung damage at slaughter. Lung damage at slaughter was a good predictor of BRD class membership between mild and severe animals, indicating the importance of health feedback information from abattoirs to producers and of better technologies to detect lung lesions in live animals in order to manage disease severity. Pre-exposure to BRD viruses reduced the likelihood of both mild and severe BRD, indicating the importance of adequate backgrounding procedures to reduce BRD severity in feedlots.
Reductions in animal performance were also seen with increasing BRD severity, translating to significant economic losses for feedlots (Blakebrough-Hall et al., 2020b). These performance indicators can be easily monitored throughout the feeding phase and used as a simple and practical tool to predict and manage BRD severity and reduce production losses.
A retrospective analysis of risk factors associated with bovine respiratory disease treatment failure in feedlot cattle
Predicting cumulative risk of bovine respiratory disease complex (BRDC) using feedlot arrival data and daily morbidity and mortality counts
Performance of multiple diagnostic methods in assessing the progression of bovine respiratory disease in calves challenged with infectious bovine rhinotracheitis virus and Mannheimia haemolytica
Diagnosis of bovine respiratory disease in feedlot cattle using blood 1H NMR metabolomics
An evaluation of the economic effects of bovine respiratory disease on animal performance, carcass traits and economic outcomes in feedlot cattle defined using four BRD diagnosis methods
Bayesian estimation of the accuracy of the calf respiratory scoring chart and ultrasonography for the diagnosis of bovine respiratory disease in pre-weaned dairy calves
The medicine and epidemiology of bovine respiratory disease in feedlots
Lung auscultation and rectal temperature as a predictor of lung lesions and bovine respiratory disease treatment outcome in feedyard cattle
Control methods for bovine respiratory disease for feedlot cattle
Laboratory test descriptions for bovine respiratory disease diagnosis and their strengths and weaknesses: gold standards for diagnosis, do they exist?
Bovine Viral Diarrhea Virus (BVDV) 1b: predominant BVDV subtype in calves with respiratory disease
Bacterial pathogens of the bovine respiratory disease complex
Associations between exposure to viruses and bovine respiratory disease in Australian feedlot cattle
Coronavirus infection in intensively managed cattle with respiratory disease
Multiplex real-time RT-PCR detection of three viruses associated with the bovine respiratory disease complex
Evaluation of shedding of bovine herpesvirus 1, bovine viral diarrhea virus 1, and bovine viral diarrhea virus 2 after vaccination of calves with a multivalent modified-live virus vaccine
PROC LCA: A SAS procedure for latent class analysis
Evaluation of a computer-aided lung auscultation system for diagnosis of bovine respiratory disease in feedlot cattle
Retrospective evaluation of clinical outcomes among cattle evaluated with a computer-aided lung auscultation system at the time of bovine respiratory disease diagnosis
The relationship between the occurrence of undifferentiated bovine respiratory disease and titer changes to Haemophilus somnus and Mannheimia haemolytica in Ontario feedlots
Current usage and future development of the Meat Standards Australia (MSA) grading system
Effects of on-arrival versus delayed modified live virus vaccination on health, performance, and serum infectious bovine rhinotracheitis titers of newly received beef calves
The noninvasive and automated detection of bovine respiratory disease onset in receiver calves using infrared thermography
An evaluation of bovine respiratory disease complex in feedlot cattle: impact on performance and carcass traits using treatment records and lung lesion scores
The epidemiology of bovine respiratory disease: what is the evidence for predisposing factors?
Effect of Mannheimia haemolytica pneumonia on behavior and physiologic responses of calves during high ambient environmental temperatures
Identification of a mutant Bovine Herpesvirus-1 (BHV-1) in post-arrival outbreaks of IBR in feedlot calves and protection with conventional vaccination
Bayesian estimation of the performance of using clinical observations and harvest lung lesions for diagnosing bovine respiratory disease in post-weaned beef calves
The authors would like to acknowledge Dr Kevin Sullivan, Bell Veterinary Services, for his assistance with the lung scoring training and protocol, and Timothy Mahony, Centre for Animal Science, Queensland Alliance for Agriculture and Food Innovation, for his assistance with interpretation of the viral data results and methodology. This research was funded by Meat and Livestock Australia. The authors declare no real or perceived conflicts of interest. Authors' contributions: C.B.H. was involved in the study design, undertook all data collection and analysis, and wrote the manuscript. P.H. assisted in the study design and analysis of the data, particularly in regard to the use of the viral data, and reviewed the final manuscript. L.G. assisted with study design, data analysis, and review of the final manuscript. All authors read and approved the final manuscript. All data generated or analyzed during this study are included in this published article.