title: SAEM Abstracts, Plenary Session  date: 2012-04-26  journal: Acad Emerg Med  DOI: 10.1111/j.1553-2712.2012.01332.x

The editors of Academic Emergency Medicine (AEM) are honored to present these abstracts accepted for presentation at the 2012 annual meeting of the Society for Academic Emergency Medicine (SAEM), May 9 to 12 in Chicago, Illinois. These abstracts represent countless hours of labor, exciting intellectual discovery, and unending dedication by our specialty's academicians. We are grateful for their consistent enthusiasm, and are privileged to publish these brief summaries of their research. This year, SAEM received 1172 abstracts for consideration, and accepted 746. Each abstract was independently reviewed by up to six dedicated topic experts blinded to the identity of the authors. Final determinations for scientific presentation were made by the SAEM Program Scientific Subcommittee co-chaired by Ali S. Raja, MD, MBA, MPH and Steven B. Bird, MD, and the SAEM Program Committee, chaired by Michael L. Hochberg, MD. Their decisions were based on the final review scores and the time and space available at the annual meeting for oral and poster presentations. There were also 125 Innovation in Emergency Medicine Education (IEME) abstracts submitted, of which 37 were accepted. The IEME Subcommittee was co-chaired by JoAnna Leuck, MD and Laurie Thibodeau, MD. We present these abstracts as they were received, with minimal proofreading and copy editing. Any questions related to the content of the abstracts should be directed to the authors. Presentation numbers precede the abstract titles; these match the listings for the various oral and poster sessions at the annual meeting in Chicago, as well as the abstract numbers (not page numbers) shown in the key word and author indexes at the end of this supplement. All authors attested to institutional review board or animal care and use committee approval at the time of abstract submission, when relevant. Abstracts marked as "late-breakers" are prospective research projects that were still in the process of data collection at the time of the December abstract deadline, but were deemed by the Scientific Subcommittee to be of exceptional interest. These projects will be completed by the time of the annual meeting; data shown here may be preliminary or interim. On behalf of the editors of AEM, the membership of SAEM, and the leadership of our specialty, we sincerely thank our research colleagues for these contributions, and their continuing efforts to expand our knowledge base and allow us to better treat our patients. David

Objectives: We sought to determine if the OCP policy resulted in a meaningful and sustained improvement in ED throughput and output metrics. Methods: A prospective pre-post experimental study was conducted using administrative data from 15 community and tertiary centres across the province. The study phases consisted of the 8 months from February to September 2010 compared against the same months in 2011. Operational data for all centres were collected through the EDIS tracking systems used in the province. The OCP included 3 main triggers: ED bed occupancy >110%, at least 35% of ED stretchers blocked by patients awaiting an inpatient bed or disposition decision, and no stretcher available for high-acuity patients. When all criteria were met, selected boarded patients were moved to an inpatient unit (non-traditional care space if no bed was available). The primary outcome was ED length of stay (LOS) for admitted patients. The ED load of boarded patients from 10-11 am was reported

Background: Two to ten percent of patients evaluated in the emergency department (ED) present with altered mental status (AMS). The prevalence of non-convulsive seizure (NCS) and other electroencephalographic (EEG) abnormalities in this population is not known. This information is needed to make recommendations regarding the routine use of emergent EEG in AMS patients. Objectives: To identify the prevalence of NCS and other EEG abnormalities in ED patients with AMS. Methods: An ongoing prospective study at two academic urban EDs. Inclusion: Patients ≥13 years old with AMS. Exclusion: An easily correctable cause of AMS (e.g. hypoglycemia, opioid overdose). A 30-minute EEG with the standard 19 electrodes was performed on each subject as soon as possible after presentation (usually within 1 hour). Outcome: The rate of EEG abnormalities based on blinded review of all EEGs by two board-certified epileptologists. Descriptive statistics are used to report EEG findings. Frequencies are reported as percentages with 95% confidence intervals (CI), and inter-rater variability is reported with kappa. Results: The interim analysis was performed on 130 consecutive patients (target sample size: 260) enrolled from May to October 2011 (median age: 61, range 13-100, 40% male). EEGs for 20 patients were reported uninterpretable by at least one rater (6 by both raters). Of the remaining 110, only 30 (27%, 95% CI 20-36%) were normal according to either rater (n = 15 by both). The most common abnormality was background slowing (n = 75, 68%, 95% CI 59-76%) by either rater (n = 47 by both), indicating underlying encephalopathy. NCS was diagnosed in 8 patients (7%, 95% CI 4-14%) by at least one rater (n = 4 by both), including 6 (5%, 95% CI 2-12%) patients in non-convulsive status epilepticus (NCSE). 29 patients (26%, 95% CI 19-35%) had interictal epileptiform discharges read by at least one rater (n = 12 by both), indicating cortical irritability and an increased risk of spontaneous seizure. Inter-rater reliability for EEG interpretations was modest (kappa: 0.53, 95% CI 0.39-0.67).

Objectives: To define diagnostic SBI and non-bacterial (non-SBI) biosignatures using RNA microarrays in febrile infants presenting to emergency departments (EDs). Methods: We prospectively collected blood for RNA microarray analysis in addition to routine screening tests including white blood cell (WBC) counts, urinalyses, cultures of blood, urine, and cerebrospinal fluid, and viral studies in febrile infants ≤60 days of age in 22 EDs. We defined SBI as bacteremia, urinary tract infection (UTI), or bacterial meningitis. We used class comparisons (Mann-Whitney p < 0.01, Benjamini for MTC and 1.25 fold change filter), modular gene analysis, and K-NN algorithms to define and validate SBI and non-SBI biosignatures in a subset of samples. Results: 81% (939/1162) of febrile infants were evaluated for SBI. 6.8% (64/939) had SBI (14 (1.5%) bacteremia, 56 (6.0%) UTIs, and 4 (0.4%) bacterial meningitis). Infants with SBIs had higher mean temperatures, and higher WBC, neutrophil, and band counts.
We analyzed RNA biosignatures on 141 febrile infants: 35 SBIs (2 meningitis, 5 bacteremia, 28 UTI), 106 non-SBIs (49 influenza, 29 enterovirus, 28 undefined viral infections), and 11 healthy controls. Class comparisons identified 1,288 differentially expressed genes between SBIs and non-SBIs. Modular analysis revealed overexpression of interferon-related genes in non-SBIs and inflammation-related genes in SBIs. 232 genes were differently expressed (p < 0.01) in each of the three non-SBI groups vs the SBI group. Unsupervised cluster analysis of these 232 genes correctly clustered 91% (128/141) of non-SBIs and SBIs. A K-NN algorithm identified 33 discriminatory genes in the training set (30 non-SBIs vs 17 SBIs), which classified an independent test set (76 non-SBIs vs 18 SBIs) with 87% accuracy. Four misclassified SBIs had over-expression of interferon-related genes, suggesting viral-bacterial co-infections, which was confirmed in one patient.

Background: Improving maternal, newborn, and child health (MNCH) is a leading priority worldwide. However, limited frontline health care capacity is a major barrier to improving MNCH in developing countries. Objectives: We sought to develop, implement, and evaluate an evidence-based Maternal, Newborn, and Child Survival (MNCS) package for frontline health workers (FHWs). We hypothesized that FHWs could be trained and equipped to manage and refer the leading MNCH emergencies. Methods: SETTING - South Sudan, which suffers from some of the world's worst MNCH indices. ASSESSMENT/INTERVENTION - A multi-modal needs assessment was conducted to develop a best-evidence package comprised of targeted trainings, pictorial checklists, and reusable equipment and commodities (Figure 1). Program implementation utilized a training-of-trainers model. EVALUATION - 1) Pre/post knowledge assessments, 2) pre/post objective structured clinical examinations (OSCEs), 3) focus group discussions, and 4) closed-response questionnaires. Results: Between Nov 2010 and Oct 2011, 72 local trainers and 708 FHWs were trained in 7 of the 10 states in South Sudan. Knowledge assessments among trainers (n = 57) improved significantly from 62.7% (SD 20.1) to 92.0% (SD 11.8) (p < 0.001). Mean scores on a maternal OSCE and a newborn OSCE pre-training, immediately post-training, and upon 2-3 month follow-up are shown in the table. Closed-response questionnaires with 54 FHWs revealed high levels of satisfaction, use, and confidence with MNCS materials. Participants reported an average of 3.0 referrals (range 0-20) to a higher level of care in the 2-3 months since training. Furthermore, 78.3% of FHWs reported being more likely to refer patients as a result of the training program. During seven focus group discussions with trained FHWs, respondents (n = 41) reported high satisfaction with MNCS trainings, commodities, and checklists, with few barriers to implementation or use. Conclusion: These findings suggest MNCS has led to improvements in South Sudanese FHWs' knowledge, skills, and referral practices with respect to appropriate management of MNCH emergencies.

No study has compared various lactate measurements to determine the optimal parameter to target. Objectives: To compare the association of blood lactate kinetics with survival in patients with septic shock undergoing early quantitative resuscitation. Methods: Preplanned analysis of a multicenter ED-based RCT of early sepsis resuscitation targeting three physiological variables: CVP, MAP, and either central venous oxygen saturation or lactate clearance.
Inclusion criteria: suspected infection, two or more SIRS criteria, and either SBP <90 mmHg after a fluid bolus or lactate >4 mmol/L. All patients had an initial lactate measured, with a repeat at two hours. Normalization of lactate was defined as a lactate decline to <2.0 mmol/L in a patient with an initial lactate ≥2.0. Absolute lactate clearance (initial - delayed value) and relative clearance ((absolute clearance)/(initial value)*100) were calculated if the initial lactate was ≥2.0. The outcome was in-hospital survival. Receiver operating characteristic curves were constructed and areas under the curve (AUC) were calculated. Differences in proportions of survival between the two groups at different lactate cutoffs were analyzed using 95% CIs and Fisher exact tests. Results: Of 272 included patients, the median initial lactate was 3.1 mmol/L (IQR 1.7, 5.8), and the median absolute and relative lactate clearances were 1 mmol/L (IQR 0.3, 2.5) and 37% (IQR 14, 57). An initial lactate >2.0 mmol/L was seen in 187/272 (69%), and 68/187 (36%) patients normalized their lactate.

Overall sutures on trunk and extremity lacerations that present in the ED. The use of absorbable sutures in the ED setting confers several advantages: patients do not need to return for suture removal, which results in a reduction in ED crowding, ED wait times, missed work or school days, and stressful procedures (suture removal) for children. Objectives: The primary objective of this study is to compare the cosmetic outcome of trunk and extremity lacerations repaired using absorbable versus nonabsorbable sutures in children and adults. A secondary objective is to compare complication rates between the two groups. Methods: Eligible patients with lacerations were randomly allocated to have their wounds repaired with Vicryl Rapide (absorbable) or Prolene (nonabsorbable) sutures. At a 10-day follow-up visit the wounds were evaluated for infection and dehiscence. After 3 months, patients were asked to return to have a photograph of the wound taken. Two blinded plastic surgeons rated the cosmetic outcome of each wound using a previously validated 100 mm visual analogue scale (VAS). A VAS score difference of 15 mm or greater was considered to be clinically significant. Results: Of the 100 patients enrolled, 45 have currently completed the study, including 19 in the Vicryl Rapide group and 26 in the Prolene group. There were no significant differences in the age, race, sex, length of wound, number of sutures, or layers of repair in the two groups. The observers' mean VAS for the Vicryl Rapide group was 55.76 mm and that for the Prolene group was 55.9 mm (95% CI 44.77-67.03), resulting in a mean difference of 0.14 mm (95% CI -16.95 to 17.23, p = 0.98). There were no significant differences in the rates of infection, dehiscence, or keloid formation between the two groups. Conclusion: The use of Vicryl Rapide instead of nonabsorbable sutures for the repair of lacerations on the trunk and extremities should be considered by emergency physicians, as it is an alternative that provides a similar cosmetic outcome.

Objectives: To determine the relationship between infection and time from injury to closure, and the characteristics of lacerations closed before and after 12 hours of injury. Methods: Over an 18-month period, a prospective multi-center cohort study was conducted at a teaching hospital, a trauma center, and a community hospital. Emergency physicians completed a structured data form when treating patients with lacerations.
Patients were followed to determine whether they had suffered a wound infection requiring treatment and to determine a cosmetic outcome rating. We compared infection rates and clinical characteristics of lacerations with chi-square and t-tests as appropriate. Results: There were 2663 patients with lacerations; 2342 had documented times from injury to closure. The mean times from injury to repair for infected and noninfected wounds were 2.4 vs. 3.0 hrs (p = 0.39), with 78% of lacerations treated within 3 hours and 4% (85) treated 12 or more hours after injury. There were no differences in the infection rates for lacerations closed before (2.9%, 95% CI 2.2-3.7) or after (2.1%, 95% CI 0.4-6.0) 6 hours and before (3.0%, 95% CI 2.3%-3.8%) or after (1.2%, 95% CI 0.03%-6.4%) 12 hours. The patients treated 12 or more hours after injury tended to be older (41 vs. 34 yrs, p = 0.02) and fewer were treated with primary closure (85% vs. 96%, p < 0.0001). Comparing wounds 12 or more hours after injury with more recent wounds, there was no effect of location on the decision to close. Wounds closed after 12 hours did not differ from wounds closed before 12 hours with respect to use of prophylactic antibiotics, type of repair, length of laceration, or cosmetic outcome. Conclusion: Closing older lacerations, even those greater than 12 hours after injury, does not appear to be associated with any increased risk of infection or adverse outcomes. Excellent irrigation and decontamination over the last 30 years may have led to this change in outcome.

Background: Deep burns may result in significant scarring leading to aesthetic disfigurement and functional disability. TGF-β is a growth factor that plays a significant role in wound healing and scar formation. Objectives: The current study was designed to test the hypothesis that a novel TGF-β antagonist would reduce scar contracture compared with its vehicle in a porcine partial-thickness burn model. Methods: Ninety-six mid-dermal contact burns were created on the backs and flanks of four anesthetized young swine using a 150 g aluminum bar preheated to 80°C for 20 seconds. The burns were randomized to treatment with topical TGF-β antagonist at one of three concentrations (0, 187, and 375 µL) in replicates of 8 in each pig. Dressing changes and reapplication of the topical therapy were performed every 2 days for 2 weeks, then twice weekly for an additional 2 weeks. Burns were photographed and full-thickness biopsies were obtained at 5, 7, 9, 14, and 28 days to determine reepithelialization and scar formation grossly and microscopically. A sample of 32 burns in each group had 80% power to detect a 10% difference in percentage scar contracture. Results: A total of 32 burns were created in each of the three study groups. Burns treated with the high-dose TGF-β antagonist healed with less scar contracture than those treated with the low dose and control (52 ± 20%, 63 ± 15%, and 62 ± 14%; ANOVA p = 0.02). Additionally, burns treated with the higher, but not the lower, dose of TGF-β antagonist healed with significantly fewer full-thickness scars than controls (62.5% vs. 100% vs. 93.8%, respectively; p < 0.001). There were no infections and no differences in the percentage of wound reepithelialization among all study groups at any of the time points. Conclusion: Treatment of mid-dermal porcine contact burns with the higher dose of TGF-β antagonist reduced scar contracture and the rate of deep scars compared with the low dose and controls.
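As an illustrative aside to the lactate kinetics abstract above (septic shock resuscitation), the sketch below shows how the stated definitions of absolute clearance, relative clearance, and lactate normalization could be computed. The function name and example values are hypothetical and are not taken from the study data.

def lactate_kinetics(initial_mmol_l, repeat_mmol_l):
    # Definitions quoted in the abstract: absolute clearance = initial - repeat
    # (mmol/L); relative clearance = absolute / initial * 100 (%);
    # normalization = repeat value < 2.0 mmol/L; all defined only when the
    # initial lactate is >= 2.0 mmol/L.
    if initial_mmol_l < 2.0:
        return None  # clearances are not defined for an initial lactate < 2.0
    absolute = initial_mmol_l - repeat_mmol_l
    relative = absolute / initial_mmol_l * 100.0
    normalized = repeat_mmol_l < 2.0
    return {"absolute": absolute, "relative": relative, "normalized": normalized}

# Hypothetical patient: initial lactate 3.1 mmol/L, repeat 1.8 mmol/L at two hours
print(lactate_kinetics(3.1, 1.8))
# -> absolute ~1.3 mmol/L, relative ~42%, normalized True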
Background: Diabetic ketoacidosis (DKA) is a common and lethal complication of diabetes. The American Diabetes Association recommends treating adult patients with a bolus dose of regular insulin followed by a continuous insulin infusion. The ADA also suggests a glucose correction rate of 75-100 mg/dL/hr to minimize complications. Objectives: To compare the effect of bolus-dose insulin therapy plus insulin infusion versus insulin infusion alone on serum glucose, bicarbonate, and pH in the initial treatment of DKA. Methods: Consecutive DKA patients were screened in the ED between March '06 and June '10. Inclusion criteria were: age >18 years, glucose >350 mg/dL, serum bicarbonate ≤15, or ketonemia or ketonuria. Exclusion criteria were: congestive heart failure, current hemodialysis, pregnancy, or inability to consent. No patient was enrolled more than once. Patients were randomized to receive either regular insulin 0.1 units/kg or the same volume of normal saline. Patients and medical and research staff were blinded. Baseline glucose, electrolytes, and venous blood gases were collected on arrival. Bolus insulin or placebo was then administered and all enrolled patients received regular insulin at a rate of 0.1 unit/kg/hr, as well as fluid and potassium repletion per the research protocol. Glucose, electrolytes, and venous blood gases were drawn hourly for 4 hours. Data between the two groups were compared using an unpaired t-test. Results: 99 patients were enrolled, with 30 being excluded. 35 patients received bolus insulin; 34 received placebo. No significant differences were noted in initial glucose, pH, bicarbonate, age, or weight between the two groups. After the first hour, glucose levels in the insulin group decreased by 151 mg/dL compared to 94 mg/dL in the placebo group (p = 0.0391, 95% CI 2.7 to 102.0). Changes in mean glucose levels, pH, bicarbonate level, and AG were not statistically different between the two groups for the remainder of the 4-hour study period. There was no difference in the incidence of hypoglycemia in the two groups. Conclusion: Administering a bolus dose of regular insulin decreased mean glucose levels more than placebo, although only for the first hour. There was no difference in the change in pH, serum bicarbonate, or anion gap at any interval. This suggests that bolus-dose insulin may not add significant benefit in the emergency management of DKA.

IHCA; 3. Return of spontaneous circulation (ROSC). Traumatic cardiac arrests were excluded. We recorded baseline demographics, arrest event characteristics, follow-up vitals and laboratory data, and in-hospital mortality. APACHE II scores were calculated at the time of ROSC, and at 24 hrs, 48 hrs, and 72 hrs. We used simple descriptive statistics to describe the study population. Univariate logistic regression was used to predict mortality with APACHE II as a continuous predictor variable. Discrimination of APACHE II scores was assessed using the area under the curve (AUC) of the receiver operator characteristic (ROC) curve. Results: A total of 229 patients were analyzed. The median age was 70 years (IQR: 56-79) and 32% were female. APACHE II score was a significant predictor of mortality for both OHCA and IHCA at baseline and at all follow-up time points (all p < 0.01). Discrimination of the score increased over time and achieved very good discrimination after 24 hrs (Table, Figure). Conclusion: The ability of APACHE II score to predict mortality improves over time in the 72 hours following cardiac arrest.
These data suggest that after 24 hours, APACHE II scoring is a useful severity-of-illness score in all post-cardiac arrest patients.

Background: Admission hyperglycemia has been described as a mortality risk factor for septic non-diabetics, but the known association of hyperglycemia with hyperlactatemia (a validated mortality risk factor in sepsis) has not previously been accounted for. Objectives: To determine whether the association of hyperglycemia with mortality remains significant when adjusted for concurrent hyperlactatemia. Methods: This was a post-hoc, nested analysis of a single-center cohort study. Providers identified study subjects during their ED encounters; all data were collected from the electronic medical record. Patients: Nondiabetic adult ED patients with a provider-suspected infection, two or more Systemic Inflammatory Response Syndrome criteria, and concurrent lactate and glucose testing in the ED. Setting: The ED of an urban teaching hospital; 2007 to 2009. Analysis: To evaluate the association of hyperglycemia (glucose >200 mg/dL) with hyperlactatemia (lactate ≥4.0 mmol/L), a logistic regression model was created; outcome - hyperlactatemia; primary variable of interest - hyperglycemia. A second model was created to determine if concurrent hyperlactatemia affects hyperglycemia's association with mortality; outcome - 28-day mortality; primary risk variable - hyperglycemia with an interaction term for concurrent hyperlactatemia. Both models were adjusted for demographics, comorbidities, presenting infectious syndrome, and objective evidence of renal, respiratory, hematologic, or cardiovascular dysfunction. Results: 1236 ED patients were included; mean age 76 ± 19 years. 133 (9%) subjects were hyperglycemic, 182 (13%) hyperlactatemic, and 225 (16%) died within 28 days of the initial ED visit. After adjustment, hyperglycemia was significantly associated with simultaneous hyperlactatemia (OR 3.9, 95% CI 2.48, 5.98). Hyperglycemia with concurrent hyperlactatemia was associated with increased mortality risk (OR 4.4, 95% CI 2.27, 8.59), but hyperglycemia in the absence of simultaneous hyperlactatemia was not (OR 0.86, 95% CI 0.45, 1.65). Conclusion: In this cohort of septic adult non-diabetic patients, mortality risk did not increase with hyperglycemia unless associated with simultaneous hyperlactatemia. The previously reported association of hyperglycemia with mortality in this population may be due to the association of hyperglycemia with hyperlactatemia.

Background: Near infrared spectroscopy (StO2) represents a measure of perfusion that provides the treating physician with an assessment of a patient's shock state and response to therapy. It has been shown to correlate with lactate and acid/base status. It is not known if using information from this monitor to guide resuscitation will result in improved patient outcomes. Objectives: To compare the resuscitation of patients in shock when the StO2 monitor is or is not being used to guide resuscitation. Methods: This was a prospective study of patients undergoing resuscitation in the ED for shock from any cause. During alternating 30-day periods, physicians were blinded to the data from the monitor, followed by 30 days in which physicians were able to see the information from the StO2 monitor and were instructed to resuscitate patients to a target StO2 value of 75.
Adult patients (age >17) with a shock index (SI) of >0.9 (SI = heart rate/systolic blood pressure) or a systolic blood pressure <80 mmHg who underwent resuscitation were enrolled. Patients had a StO2 monitor placed on the thenar eminence of their least-injured hand. Data from the StO2 monitor were recorded continuously and noted every minute along with blood pressure, heart rate, and oxygen saturation. All treatments were recorded. Patients' charts were reviewed to determine the diagnosis, ICU-free days in the 28 days after enrollment, inpatient LOS, and 28-day mortality. Data were compared using Wilcoxon rank sum and chi-square tests. Results: 107 patients were enrolled, 51 during blinded periods and 56 during unblinded periods. The median presenting shock index was 1.24 (range 0.5 to 4.0) for the blinded group and 1.10 (0.5-3.3) for the unblinded group (p = 0.13). The median time in department was 70 minutes (range 22-407) for the blinded and 76 minutes (range 11-275) for the unblinded groups (p = 0.99). The median hospital LOS was 1 day (range 0-30) for the blinded group, and 2 days (range 0-23) in the unblinded group (p = 0.63). The mean number of ICU-free days was 22 ± 9 for the blinded group and 19 ± 11 for the unblinded group (p = 0.26). Among patients where the physician indicated using the StO2 monitor data to guide patient care, the ICU-free days were 21.4 ± 9 for the blinded group and 16.3 ± 12 for the unblinded group (p = 0.06).

Background: Inducing therapeutic hypothermia (TH) using 4°C IV fluids in resuscitated cardiac arrest patients has been shown to be feasible and effective. Limited research exists assessing the efficiency of this cooling method. Objectives: The objective was to determine an efficient infusion method for keeping fluid close to 4°C upon exiting an IV. It was hypothesized that colder temperatures would be associated with both higher flow rate and insulation of the fluid bag. Methods: Efficiency was studied by assessing change in fluid temperature (°C) during the infusion, under three laboratory conditions. Each condition was performed four times using 1-liter bags of normal saline. Fluid was infused into a 1000 mL beaker through 10 gtts tubing. Flow rate was controlled using a tubing clamp and in-line transducer with a flowmeter, while temperature was continuously monitored in a side port at the terminal end of the IV tubing using a digital thermometer. The three conditions included infusing chilled fluid at a rate of 40 mL/min, which is equivalent to 30 mL/kg/hr for an 80 kg patient, 105 mL/min, and 105 mL/min using a chilled and insulated pressure bag. Descriptive statistics and analysis of variance were performed to assess changes in fluid temperature. Results: The average fluid temperatures at time 0 were 3.40 (95% CI 3.12-3.69) (40 mL/min), 3.35 (95% CI 3.25-3.45) (105 mL/min), and 2.92 (95% CI 2.40-3.45) (105 mL/min + insulation). There was no significant difference in starting temperature between groups (p = 0.16). The average fluid temperatures after 100 mL had been infused were 10.02 (95% CI 9.30-10.74) (40 mL/min), 7.35 (95% CI 6.91-7.79) (105 mL/min), and 6.95 (95% CI 6.47-7.43) (105 mL/min + insulation). The higher flow rate groups had significantly lower temperature than the lower flow rate after 100 mL of fluid had been infused (p < 0.001). The average fluid temperatures after 1000 mL had been infused were 16.77 (95% CI 15.96-17.58) (40 mL/min), 11.40 (95% CI 11.18-11.61) (105 mL/min), and 7.75 (95% CI 7.55-7.99) (105 mL/min + insulation).
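As an illustrative aside, the two arithmetic definitions used in the preceding shock resuscitation and cold fluid infusion abstracts (shock index = heart rate/systolic blood pressure, with an enrollment threshold of SI >0.9 or SBP <80 mmHg, and the weight-based rate conversion of 30 mL/kg/hr to 40 mL/min for an 80 kg patient) can be written out directly. The function names and the example vitals below are hypothetical.

def shock_index(heart_rate, systolic_bp):
    # Shock index as defined in the StO2 resuscitation abstract above: HR / SBP
    return heart_rate / systolic_bp

def meets_enrollment_criteria(heart_rate, systolic_bp):
    # Enrollment threshold quoted above: SI > 0.9 or systolic BP < 80 mmHg
    return shock_index(heart_rate, systolic_bp) > 0.9 or systolic_bp < 80

def rate_ml_per_min(ml_per_kg_hr, weight_kg):
    # Convert a weight-based infusion rate to mL/min (cold fluid infusion abstract)
    return ml_per_kg_hr * weight_kg / 60.0

print(meets_enrollment_criteria(heart_rate=112, systolic_bp=90))  # True (SI ~ 1.24)
print(rate_ml_per_min(30, 80))                                    # 40.0 mL/min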
There was a significant difference in temperature between all three groups after 1000 mL of fluid had been infused (p < 0.001). Conclusion: In a laboratory setting, the most efficient method of infusing cold fluid appears to be a method that both keeps the bag of fluid insulated and is infused at a faster rate. fluid bolus. Patients were categorized by presence of vasoplegic or tissue dysoxic shock. Demographics and sequential organ failure assessment (SOFA) scores were evaluated between the groups. The primary outcome was in-hospital mortality. Data were analyzed using t-tests, chi-squared test, and proportion differences with 95% confidence intervals as appropriate. Results: A total of 242 patients were included: 89 patients with vasoplegic shock and 153 with tissue dysoxic shock. There were no significant differences in age (61 vs. 58 years), Caucasian race (53% vs. 58%), or male sex (57% vs. 52%) between the dysoxic shock and vasoplegic shock groups, respectively. The group with vasoplegic shock had a lower initial SOFA score than did the group with tissue dysoxic shock (5.7 vs. 7.3 points, p = 0.0002). The primary outcome of in-hospital mortality occurred in 8/89 (9%) of patients with vasoplegic shock compared to 40/153 (26%) in the group with tissue dysoxic shock (proportion difference 17%, 95% CI 7-26%, p < 0.0001). Conclusion: In this analysis of patients with septic shock, we found a significant difference in in-hospital mortality between patients with vasoplegic versus tissue dysoxic septic shock. These findings suggest a need to consider these differences when designing future studies of septic shock therapies. Background: The PRE-SHOCK population, ED sepsis patients with tissue hypoperfusion (lactate of 2.0-3.9 mM), commonly deteriorates after admission and requires transfer to critical care. Objectives: To determine the physiologic parameters and disease severity indices in the ED PRE-SHOCK sepsis population that predict clinical deterioration. We hypothesized that neither initial physiologic parameters nor organ function scores will be predictive. Methods: Design: Retrospective analysis of a prospectively maintained registry of sepsis patients with lactate measurements. Setting: An urban, academic medical center. Participants: The PRE-SHOCK population, defined as adult ED sepsis patients with either elevated lactate (2.0-3.9 mM) or transient hypotension (any sBP <90 mmHg) receiving IV antibiotics and admitted to a medical floor. Consecutive patients meeting PRE-SHOCK criteria were enrolled over a 1-year period. Patients with overt shock in the ED, pregnancy, or acute trauma were excluded. Outcome: Primary patientcentered outcome of increased organ failure (sequential organ failure assessment [SOFA] score increase >1 point, mechanical ventilation, or vasopressor utilization) within 72 hours of admission or in-hospital mortality. Results: We identified 248 PRE-SHOCK patients from 2649 screened. The primary outcome was met in 54% of the cohort and 44% were transferred to the ICU from a medical floor. Patients meeting the outcome of increased organ failure had a greater Shock Index (1.02 vs 0.93, p = 0.042) and heart rate (115 vs 105, p < 0.001) with no difference in initial lactate, age, MAP, or exposure to hypotension (sBP <100 mmHg). There was no difference in the Predisposition, Infection, Response, and Organ dysfunction (PIRO) score between groups (6.4 vs 5.7, p = 0.052). 
Outcome patients had similar initial levels of organ dysfunction but had higher SOFA scores at 24, 48, and 72 hours, a higher ICU transfer rate (60 vs 24%, p < 0.001), and increased ICU and hospital lengths of stay. Conclusion: The PRE-SHOCK sepsis population has a high incidence of clinical deterioration, progressive organ failure, and ICU transfer. Physiologic data in the ED were unable to differentiate the PRE-SHOCK sepsis patients who developed increased organ failure. This study supports the need for an objective organ failure assessment in the emergency department to supplement clinical decision-making.

Background: Lipopolysaccharide (LPS) has long been recognized to initiate the host inflammatory response to infection with gram-negative bacteria (GNB). Large clinical trials of potentially very expensive therapies continue to have the objective of reducing circulating LPS. Previous studies have found varying prevalence of LPS in the blood of patients with severe sepsis. Compared with sepsis trials conducted 20 years ago, the frequency of GNB in culture specimens from emergency department (ED) patients enrolled in clinical trials of severe sepsis has decreased. Objectives: To test the hypothesis that, prior to antibiotic administration, circulating LPS can be detected in the plasma of fewer than 10% of ED patients with severe sepsis. Methods: Secondary analysis of a prospective ED-based RCT of early quantitative resuscitation for severe sepsis. Blood specimens were drawn at the time severe sepsis was recognized, defined as two or more systemic inflammatory criteria and a serum lactate >4 mM or SBP <90 mmHg after fluid challenge. Blood was drawn in EDTA prior to antibiotic administration or within the first several hours, immediately centrifuged, and plasma frozen at -80°C. Plasma LPS was quantified using the limulus amebocyte lysate (LAL) assay by a technician blinded to all clinical data. Results: 180 patients were enrolled with 140 plasma samples available for testing. Median age was 59 ± 17 years, 50% were female, with overall mortality of 18%. Forty of 140 patients (29%) had any culture specimen positive for GNB, including 21 (15%) with positive blood cultures. Only five specimens had detectable LPS, including two with a GNB-positive culture specimen, and three were LPS-positive without GNB in any culture. Prevalence of detectable LPS was 3.5% (CI: 1.5%-8.1%). Conclusion: The frequency of detectable LPS in antibiotic-naive plasma is too low to serve as a useful diagnostic test or therapeutic target in ED patients with severe sepsis. The data raise the question of whether post-antibiotic plasma may have a higher frequency of detectable LPS.

Background: EGDT is known to reduce mortality in septic patients. There is no evidence to date that delineates the role of using a risk stratification tool, such as the Mortality in Emergency Department Sepsis (MEDS) score, to determine which subgroups of patients may have a greater benefit with EGDT. Objectives: Our objective was to determine if our EGDT protocol differentially affects mortality based on the severity of illness using the MEDS score. Methods: This study is a retrospective chart review of 243 patients, conducted at an urban tertiary care center, after implementing an EGDT protocol on July 1, 2008 (Figure).
This study compares in-hospital mortality, length of stay (LOS) in the ICU, and LOS in the ED between the control group (126 patients from 1/1/07-12/31/07) and the post-implementation group (117 patients from 7/1/08-6/30/09), using the MEDS score as a risk stratification tool. Inclusion criteria: patients who presented to our ED with a suspected infection, two or more SIRS criteria, and a MAP <65 mmHg or a SBP <90 mmHg. Exclusion criteria: age <18, death on arrival to the ED, DNR or DNI, emergent surgical intervention, or those with an acute myocardial infarction or CHF exacerbation. A two-sample t-test was used to show that the mean age and number of comorbidities were similar between the control and study groups (p = 0.27 and 0.87, respectively). Mortality was compared and adjusted for MEDS score using logistic regression. The odds ratios and predicted probabilities of death were generated using the fitted logistic regression model. ED and ICU LOS were compared using Mood's median test. Results: When controlling for illness severity using the MEDS score, the relative risk (RR) of death with EGDT is about half that of the control group (RR = 0.52, 95% CI [0.278-0.973], p = 0.04). Also, by applying the MEDS score to risk stratify patients into various groups of illness severity, we found no specific groups where EGDT is more efficacious at reducing the predicted probability of death (Table 1). Without controlling for MEDS score, there is a trend toward a 9.7% reduction in absolute mortality when EGDT is used (control = 30.2%, study = 20.5%, p = 0.086). EGDT leads to a 40.3% reduction in the median LOS in the ICU (control = 124 hours, study = 74 hours, p = 0.03), without increasing LOS in the ED (control = 6 hours, study = 7 hours, p = 0.50). Conclusion: EGDT is beneficial in patients with severe sepsis or septic shock, regardless of their MEDS score.

Background: In patients experiencing acute coronary syndrome (ACS), prompt diagnosis is critical in achieving the best health outcome. While ECG analysis is usually sufficient to diagnose ACS in cases of ST elevation, ACS without ST elevation is reliably diagnosed through serial testing of cardiac troponin I (cTnI). Point-of-care testing (POCT) for cTnI by venipuncture has been proven a more rapid means to diagnosis than central laboratory testing. Implementing fingerstick testing for cTnI in place of standard venipuncture methods would allow for faster and easier procurement of patients' cTnI levels, as well as increase the likelihood of starting a rapid test for cTnI in the prehospital setting, which could allow for even earlier diagnosis of ACS. Objectives: To determine if fingerstick blood samples yield accurate and reliable troponin measurements compared to conventional venous blood draws using the i-STAT POC device. Methods: This experimental study was performed in the ED of a quaternary care suburban medical center between June and August 2011. Fingerstick blood samples were obtained from adult ED patients for whom standard (venipuncture) POC troponin testing was ordered. The time between fingerstick and standard draws was kept as narrow as possible. cTnI assays were performed at the bedside using the i-STAT 1 (Abbott Point of Care). Results: 94 samples from 87 patients were analyzed by both fingerstick and standard ED POCT methods (see Table). Four resulted in cartridge error. Compared to the "gold standard" ED POCT, fingerstick testing has a positive predictive value of 100%, negative predictive value of 96%, sensitivity of 79%, and specificity of 100%.
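As an aside on the fingerstick troponin accuracy figures just quoted, the sketch below shows how sensitivity, specificity, PPV, and NPV follow from a standard 2x2 table. The counts used are hypothetical placeholders chosen only to be consistent with the reported percentages; they are not the study's actual data.

def diagnostic_accuracy(tp, fp, fn, tn):
    # Standard 2x2 definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    # PPV = TP/(TP+FP), NPV = TN/(TN+FN)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts (not the study data): 11 true positives, 0 false positives,
# 3 false negatives, 76 true negatives
print(diagnostic_accuracy(tp=11, fp=0, fn=3, tn=76))
# -> sensitivity ~0.79, specificity 1.00, ppv 1.00, npv ~0.96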
No significant difference in cTnI level was found between the two methods, with a nonparametric intraclass correlation coefficient of 0.994 (95% CI 0.992-0.996, p-value < 0.001). Conclusion: Whole blood fingerstick cTnI testing using the i-STAT device is suitable for rapid evaluation of cTnI level in prehospital and ED settings. However, results must be interpreted with caution if they are close to the cutoff between normal and elevated levels. Additional testing on a larger sample would be beneficial. The practicality and clinical benefit of using fingerstick cTnI testing in the EMS setting must still be assessed.

Background: Adjudication of the diagnosis of acute myocardial infarction (AMI) in clinical studies typically occurs at each site of subject enrollment (local) or by experts at an independent site (central). From 2000 to 2007, the troponin (cTn) element of the diagnosis was predicated on the local laboratories, using a mix of the 99th percentile reference cTn and ROC-determined cutpoints. In 2007, the universal definition of AMI (UDAMI) defined it by the 99th percentile reference alone. Objectives: To compare the diagnosis rates of AMI as determined by local adjudication vs. central adjudication using UDAMI criteria. Methods: Retrospective analysis of data from the Myeloperoxidase in the Diagnosis of Acute Coronary Syndromes (ACS) Study (MIDAS), an 18-center prospective study with enrollment from 12/19/06 to 9/20/07 of patients with suspected ACS presenting to the ED <8 hours after symptom onset and in whom serial cTn and objective cardiac perfusion testing was planned. Adjudication of ACS was done by single local principal investigators using clinical data and local cTn cutpoints from 13 different cTn assays, and applying the 2000 definition. Central adjudication was done after completion of the MIDAS primary analysis using the same data and local cTn assay, but by experts at three different institutions, using the UDAMI and the manufacturer's 99th percentile cTn cutpoint, and not blinded to local adjudications. Discrepant diagnoses were resolved by consensus. Local vs. central cTn cutpoints differed for six assays, with central cutpoints lower in all. Statistics were by chi-square and kappa. Results: Excluding 11 cases deemed indeterminate by central adjudication, 1096 cases were successfully adjudicated. Local adjudication resulted in 104 AMI (9.5% of total) and 992 non-AMI; central adjudication resulted in 134 (12.2%) AMI and 962 non-AMI. Overall, 44 local diagnoses (4%) were either changed from non-AMI to AMI or from AMI to non-AMI (p < 0.001). Interrater reliability across both methods was found to be kappa = 0.79 (p < 0.001). For ACS diagnosis, local adjudication identified 252 ACS cases (23%) and 854 non-ACS, while central adjudication identified 275 ACS (25%) and 831 non-ACS. Overall, 61 local diagnoses (6%) were either changed from non-ACS to ACS or from ACS to non-ACS (p < 0.001). Interrater reliability found kappa = 0.85 (p < 0.001). Conclusion: Central and local adjudication resulted in significantly different rates of AMI and ACS diagnosis. However, overall agreement of the two methods across these two diagnoses was acceptable.

occur four times more often in cocaine users. Biomarkers myeloperoxidase (MPO) and C-reactive protein (CRP) have potential in the diagnosis of ACS. Objectives: To evaluate the utility of MPO and CRP in the diagnosis of ACS in patients presenting to the ED with cocaine-associated chest pain and compare the predictive value to nonusers.
We hypothesized that these markers may be more sensitive for ACS in nonusers given the underlying pathophysiology of enhanced plaque inflammation. Methods: A secondary analysis of a cohort study of enrolled ED patients who received evaluation for ACS at an urban, tertiary care hospital. Structured data collection at presentation included demographics, chest pain history, lab, and ECG data. Subjects included those with self-reported or lab-confirmed cocaine use and chest pain. They were matched to controls based on age, sex, and race. Our main outcome was diagnosis of ACS at the index visit. We determined median MPO and CRP values, calculated maximal AUC for ROC curves, and found cut-points to maximize sensitivity and specificity. Data are presented with 95% CI. Results: Overall, 95 patients in the cocaine-positive group and 86 patients in the nonusers group had MPO and CRP levels measured. Patients had a median age of 47 (IQR 40-52), 90% were black or African American, and 62% were male (p > 0.05 between groups). Fifteen patients were diagnosed with ACS: 8 patients in the cocaine group and 7 in the nonusers group. Comparing cocaine users to nonusers, there was no difference in MPO (median 162 vs 136 ng/mL; p = 0.78) or CRP (3 [IQR 1-9] vs 5 [IQR 1-15] mg/L; p = 0.08). The AUC for MPO was 0.65 (95% CI 0.39-0.90) vs 0.54 (95% CI 0.19-0.73). The optimal cut-point to maximize sensitivity and specificity was 242 ng/mL, which gave a sensitivity of 0.42 and specificity of 0.75. Using this cut-point, 57% vs 29% of ACS in cocaine users vs nonusers would be identified. The AUC for CRP was 0.63 (95% CI 0.39-0.88) in cocaine users vs 0.73 (95% CI 0.52-0.95) in nonusers. The optimal cut-point was 11.9 mg/L, with a sensitivity of 0.67 and specificity of 0.79. Using this cut-point, 43% vs 88% of ACS in cocaine users and nonusers would have been identified. Conclusion: The diagnostic accuracy of MPO and CRP is not different in cocaine users than in nonusers and does not appear to have sufficient discriminatory ability in either cohort.

Results: 18 hrs of moderate PE caused a significant decrease in RV heart function in rats treated with the solvent for BAY 41-8543: peak systolic pressure (PSP) decreased from 39 ± 1.5 mmHg (control) to 16 ± 1.5 (PE), +dP/dt decreased from 1192 ± 93 mmHg/sec to 463 ± 77, and -dP/dt decreased from -576 ± 60 mmHg/sec to -251 ± 9. Treatment of rats with BAY 41-8543 significantly improved all three indices of RV heart function (PSP 29 ± 2.6, +dP/dt 1109 ± 116, -dP/dt -426 ± 69). 5 hrs of severe PE also caused significant RV dysfunction (PSP 25 ± 2, -dP/dt -356 ± 28), and treatment with BAY 41-8543 produced protection of RV heart function (PSP 34 ± 2, -dP/dt -535 ± 41) similar to the 18 hr moderate PE model. Conclusion: Experimental PE produced significant RV dysfunction, which was ameliorated by treatment of the animals with the soluble guanylate cyclase stimulator, BAY 41-8543.

1 Hospital of the University of Pennsylvania, Philadelphia, PA; 2 Cooper University Hospital, Camden, NJ. Background: Patients who present to the ED with symptoms of potential acute coronary syndrome (ACS) can be safely discharged home after a negative coronary computerized tomographic angiography (CTA). However, the duration of time for which a negative coronary CTA can be used to inform decision making when patients have recurrent symptoms is unknown.
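As an illustrative aside to the MPO/CRP abstract above, choosing a biomarker cut-point that maximizes sensitivity plus specificity is commonly done with Youden's J over an ROC curve; a minimal sketch follows. The data are synthetic and the use of scikit-learn is an assumption for illustration only, not a description of the authors' analysis.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic biomarker values: 15 "ACS" cases and 150 "no ACS" controls
values = np.concatenate([rng.normal(250, 80, 15), rng.normal(150, 60, 150)])
labels = np.concatenate([np.ones(15), np.zeros(150)])

fpr, tpr, thresholds = roc_curve(labels, values)
j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
best = j.argmax()
print(f"optimal cut-point ~{thresholds[best]:.0f} ng/mL, "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")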
Objectives: We examined patients who received more than one coronary CTA for evaluation of ACS to determine whether they had disease progression, as defined by crossing the threshold from noncritical (<50% maximal stenosis) to potentially critical disease. Methods: We performed a structured comprehensive record search of all coronary CTAs performed from 2005 to 2010 at a tertiary care health system. Low-to-intermediate-risk ED patients who received two or more coronary CTAs, at least one from an ED evaluation for potential ACS, were identified. Patients who were revascularized between scans were excluded. We collected demographic data, clinical course, time between scans, and number of ED visits between scans. Record review was structured and done by trained abstractors. Our main outcome was progression of coronary stenosis between scans, specifically crossing the threshold from noncritical to potentially critical disease. Results: Overall, 32 patients met study criteria (median age 45, interquartile range [IQR] 37.5-48; 56% female; 88% black). The median time between studies was 27.3 months. 22 patients did not have stenosis in any vessel on either coronary CTA, two studies showed increasing stenosis of <20%, and the rest showed "improvement," most due to better imaging quality. No patient initially below the 50% threshold subsequently exceeded it (0%; 95% CI, 0-11.0%). Patients also had varying numbers of ED visits (median number of visits 5, range 0-23) and numbers of ED visits for potentially cardiac complaints (median 1, range 0-6); 10 were re-admitted for potentially cardiac complaints (for example, chest pain or shortness of breath), and 9 received further provocative cardiac testing, all of which had negative results. Conclusion: We did not find clinically significant disease progression within a 2-year time frame in patients who had a negative coronary CTA, despite a high number of repeat visits. This suggests that a prior negative coronary CTA may be able to be used to inform decision making within this time period.

42.7-48.6) compared to non-TRO CT patients. There was no significant difference in image quality between TRO CT images and those of dedicated CT scans in any studies performing this comparison. Similarly, there was no significant difference between TRO CT and other diagnostic modalities in regard to length of stay or admission rate. When compared to conventional coronary angiography as the gold standard for evaluation of CAD, TRO CT had the following pooled diagnostic accuracy estimates: sensitivity 0.94. Conclusion: TRO chest CT is comparable to dedicated PE, coronary, or AD CT in regard to image quality, length of stay, and admission rate, and is highly accurate for detecting CAD. The utility of TRO CT depends on the relative pre-test probabilities of the conditions being assessed, and its role is yet to be clearly defined. TRO CT, however, involves increased radiation exposure and contrast volume, and for this reason clinicians should be selective in its use.

Background: Coronary computed tomographic angiography (CCTA) has high sensitivity, specificity, accuracy, and prognostic value for coronary artery disease (CAD) and ACS. However, how a CCTA informs subsequent use of prescription medication is unclear. Objectives: To determine if detection of critical or noncritical CAD on CCTA is associated with initiation of aspirin and statins for patients who presented to the ED with chest pain.
We hypothesized that aspirin and statins would be more likely to be prescribed to patients with noncritical disease relative to those without any CAD. Methods: Prospective cohort study of patients who received CCTA as part of the evaluation of chest pain in the ED or observation unit. Patients were contacted and medical records were reviewed to obtain clinical follow-up for up to the year after CCTA. The main outcome was new prescription of aspirin or statin. CAD severity on CCTA was graded as absent, mild (1% to 49%), moderate (50% to 69%), or severe (≥70%) stenosis. Logistic regression was used to assess the association of stenosis severity with new medication prescription; covariates were determined a priori. Results: 859 patients who had CCTA performed consented to participate in this study or met waiver of consent for record review only (median age …, 59% female, 71% black). Median follow-up time was 333 days, IQR 70-725 days. At baseline, 13% of the total cohort was already prescribed aspirin and 8% on a statin medication. Two hundred seventy-nine (32%) patients were found to have stenosis in at least one vessel. In patients with absent, mild, moderate, and severe CAD on CCTA, aspirin was initiated in 11%, 34%, 52%, and 55%; statins were initiated in 7%, 22%, 32%, and 53% of patients. After adjustment for age, race, sex, hypertension, diabetes, cholesterol, tobacco use, and admission to the hospital after CCTA, higher grades of CAD severity were independently associated with greater post-CCTA use of aspirin (OR 1.9 per grade, 95% CI 1.4-2.2, p < 0.001) and statins (OR 1.9, 95% CI 1.5-2.4, p < 0.001). Conclusion: Greater CAD severity on CCTA is associated with increased medication prescription for CAD. Patients with noncritical disease are more likely than patients without any disease to receive aspirin and statins. Future studies should examine whether these changes lead to decreased hospitalizations and improved cardiovascular health.

Background: Hess et al. developed a clinical decision rule for patients with acute chest pain consisting of the absence of five predictors: ischemic ECG changes not known to be old, elevated initial or 6-hour troponin level, known coronary disease, "typical" pain, and age over 50. Patients less than 40 years old required only a single troponin evaluation. Objectives: To test the hypothesis that patients less than 40 years old without these criteria are at <1% risk for major adverse cardiovascular events (MACE), including death, AMI, PCI, and CABG. Methods: We performed a secondary analysis of several combined prospective cohort studies that enrolled ED patients who received an evaluation for ACS in an urban ED from 1999 to 2009. Cocaine users and STEMI patients were excluded. Structured data collection at presentation included demographics, pain description, history, lab, and ECG data for all studies. Hospital course was followed daily. Thirty-day follow-up was done by telephone. Our main outcome was 30-day MACE using objective criteria. The secondary outcome was potential change in ED disposition due to application of the rule. Descriptive statistics and 95% CIs were used. Results: Of 9289 visits for potential ACS, patients had a mean age of 52.4 ± 14.7 yrs; 68% were black and 59% female. There were 638 patients (6.9%) with 30-day CV events (93 dead, 384 AMI, 298 PCI).
Sequential removal of patients in order to meet the final rule for patients less than 40 excluded patients based upon: ischemic ECG changes not known to be old (n = 434, 30% MACE rate), elevated initial troponin level (n = 237, 60% MACE), known coronary disease (n = 1622, 11% MACE), "typical" pain (n = 3179, 3% MACE), and age over 40 (n = 2690, 3.4% MACE), leaving 1127 patients less than 40 with 0.8% MACE [95% CI, 0.4-1.5%]. Of this cohort, 70% were discharged home from the ED by the treating physician without application of this rule. Adding a second negative troponin in patients 40-50 years old identified a group of 1139 patients with a 2.0% rate of MACE [1.3-3.0] and a 48% discharge rate. Conclusion: The Hess rule appears to identify a cohort of patients at approximately 1% risk of 30-day MACE, and may enhance discharge of young patients. However, even without application of this rule, the 70% of young patients at low risk are already being discharged home based upon clinical judgment.

Background: A Clinical Decision Support System (CDSS) incorporates evidence-based medicine into clinical practice, but this technology is underutilized in the ED. A CDSS can be integrated directly into an electronic medical record (EMR) to improve physician efficiency and ease of use. The Christopher study investigators validated a clinical decision rule for patients with suspected pulmonary embolism (PE). The rule stratifies patients using Wells' criteria to undergo either D-dimer testing or a CT angiogram (CT). The effect of this decision rule, integrated as a CDSS into the EMR, on ordering CTs has not been studied. Objectives: To assess the effect of a mandatory CDSS on the ordering of D-dimers and CTs for patients with suspected PE. Methods: We assessed the number of CTs ordered for patients with suspected PE before and after integrating a mandatory CDSS in an urban community ED. Physicians were educated regarding CDSS use prior to implementation. The CDSS advised physicians as to whether a negative D-dimer alone excluded PE or if a CT was required based on Wells' criteria. The EMR required physicians to complete the CDSS prior to ordering the CT. However, physicians maintained the ability to order a CT regardless of the CDSS recommendation. Patients ≥18 years of age presenting to the ED with a chief complaint of chest pain, dyspnea, syncope, or palpitations were included in the data analysis. We compared the proportion of D-dimers and CTs ordered during the 8-month periods immediately before and after implementing the CDSS. All 27 physicians who worked in the ED during both time periods were included in the analysis. Patients with an allergy to intravenous contrast agents, renal insufficiency, or pregnancy were excluded. Results were analyzed using a chi-square test. Results: A total of 11,931 patients were included in the data analysis (6054 pre- and 5877 post-implementation). CTs were ordered for 215 patients (3.6%) in the pre-implementation group and 226 patients (3.8%) in the post-implementation group; p = 0.396. A D-dimer was ordered for 392 patients (6.5%) in the pre-implementation group and 382 patients (6.5%) in the post-implementation group; p = 0.958. Conclusion: In this single-center study, EMR integration of a mandatory CDSS for evaluation of PE did not significantly alter ordering patterns of CTs and D-dimers.

Identification of Patients with Low-Risk Pulmonary Emboli Suitable for Discharge from the Emergency Department. Mike Zimmer, Keith E.
Kocher. University of Michigan, Ann Arbor, MI. Background: Recent data, including a large, multicenter randomized controlled trial, suggest that a low-risk cohort of patients diagnosed with pulmonary embolism (PE) exists who can be safely discharged from the ED for outpatient treatment. Objectives: To determine if there is a similar cohort at our institution who have a low rate of complications from PE suitable for outpatient treatment. Methods: This was a retrospective chart review at a single academic tertiary referral center with an annual ED volume of 80,000 patients. All adult ED patients who were diagnosed with PE during a 24-month period from 11/1/09 through 10/31/11 were identified. The Pulmonary Embolism Severity Index (PESI) score, a previously validated clinical decision rule to risk stratify patients with PE, was calculated. Patients with high PESI (>85) were excluded. Additional exclusion criteria included patients who were at high risk of complications from initiation of therapeutic anticoagulation and those patients with other clear indications for admission to the hospital. The remaining cohort of patients with low-risk PE (PESI ≤85) was included in the final analysis. Outcomes were measured at 14 and 90 days after PE diagnosis and included death, major bleeding, and objectively confirmed recurrent venous thromboembolism (VTE). Results: During the study period, 298 total patients were diagnosed with PE. There were 172 (58%) patients categorized as "low risk" (PESI ≤85), with 42 removed because of various pre-defined exclusion criteria. Of the remaining 130 (44%) patients suitable for outpatient treatment, 5 patients (3.8%; 95% CI, 0.5%-7.2%) had one or more negative outcomes by 90 days. This included 2 (1.5%; 95% CI, 0%-3.7%) major bleeding events, 2 (1.5%; 95% CI, 0%-3.7%) recurrent VTE, and 2 (1.5%; 95% CI, 0%-3.7%) deaths. None of the deaths were attributable to PE or anticoagulation. One patient suffered both a recurrent VTE and died within 90 days. Both patients who died within 90 days were transitioned to hospice care because of worsening metastatic burden. At 14 days, there was 1 bleeding event (0.8%; 95% CI, 0%-2.3%), no recurrent VTE, and no deaths. The average hospital length of stay for these patients was 2.8 days (SD ±1.6). Conclusion: Over 40% of our patients diagnosed with PE in the ED may have been suitable for outpatient treatment, with 4% suffering a negative outcome within 90 days and 0.8% suffering a negative outcome within 14 days. In addition, the average hospital length of stay for these patients was 2.8 days, which may represent a potential cost savings if these patients had been managed as outpatients. Our experience supports previous studies that suggest the safety of outpatient treatment of patients diagnosed with PE in the ED. Given the potential savings related to a decreased need for hospitalization, these results have health policy implications and support the feasibility of creating protocols to facilitate this clinical practice change.

Background: Chest x-rays (CXRs) are commonly obtained on ED chest pain patients presenting with suspected acute coronary syndrome (ACS). A recently derived clinical decision rule (CDR) determined that patients who have no history of congestive heart failure, have never smoked, and have a normal lung examination do not require a CXR in the ED. Objectives: To validate the diagnostic accuracy of the Hess CXR CDR for ED chest pain patients with suspected ACS.
Methods: This was a prospective observational study of a convenience sample of chest pain patients over 24 years old with suspected ACS who presented to a single urban academic ED. The primary outcome was the ability of the CDR to identify patients with abnormalities on CXR requiring acute ED intervention. Data were collected by research associates using the chart and physician interviews. Abnormalities on CXR and specific interventions were predetermined, with a positive CXR defined as one with abnormality requiring ED intervention, and a negative CXR defined as either normal or abnormal but not requiring ED intervention. The final radiologist report was used as a reference standard for CXR interpretation. A second radiologist, blinded to the initial radiologist's report, reviewed the CXRs of patients meeting the CDR criteria to calculate inter-observer agreement. Patients were followed up by chart review and telephone interview 30 days after presentation. Results: Between January and August 2011, 178 patients were enrolled, of whom 38 (21%) were excluded and 10 (5.6%) did not receive CXRs in the ED. Of the 130 remaining patients, 74 (57%) met the CDR. The CDR identified all patients with a positive CXR (sensitivity = 100%, 95%CI 40-100%). The CDR identified 73 of the 126 patients with a negative CXR (specificity = 58%, 95%CI 49-67%). The positive likelihood ratio was 2.4 (95%CI 1.9-2.9). Inter-observer agreement between radiologists was substantial (kappa = 0.63, 95%CI 0.41-0.85). Telephone contact was made with 78% of patients and all patient charts were reviewed at 30 days. None had any adverse events related to a Background: Increasing the threshold to define a positive D-dimer in low-risk patients could reduce unnecessary computed tomographic pulmonary angiography (CTPA) for suspected PE. This strategy might increase rates of missed PE and missed pneumonia, the most common non-thromboembolic finding on CTPA that might not otherwise be diagnosed. Objectives: Measure the effect of doubling the standard D-dimer threshold for ''PE unlikely'' Revised Geneva (RGS) or Wells' scores on the exclusion rate, frequency, and size of missed PE and missed pneumonia. Methods: Prospective enrollment at four academic US hospitals. Inclusion criteria required patients to have at least one symptom or sign and one risk factor for PE, and have 64-channel CTPA completed. Pretest probability data were collected in real time and the D-dimer was measured in a central laboratory. Criterion standard for PE or pneumonia consisted of CTPA interpretation by two independent radiologists combined with necessary treatment plan. Subsegmental PE was defined as total vascular obstruction <5%. Patients were followed for outcome at 30 days. Proportions were compared with 95% CIs. Results: Of 678 patients enrolled, 126 (19%) were PE+ and 93 (14%) had pneumonia. With RGS≤6 and standard threshold (<500 ng/mL), D-dimer was negative in 110/678 (16%, 95% CI: 13-19%), and 4/110 were PE+ (posterior probability 3.8%, 95% CI: 1-9.3%). With RGS≤6 and a threshold <1000 ng/mL, D-dimer was negative in 208/678 (31%, 27-44%) and 11/208 (5.3%, 2.8-9.3%) were PE+, but 10/11 missed PEs were subsegmental, and none had concomitant DVT. The posterior probability for pneumonia among patients with RGS≤6 and D-dimer<500 was 9/110 (8.2%, 4-15%), which compares favorably to the posterior probability of 12/208 (5.4%, 3-10%) observed with RGS≤6 and D-dimer<1000 ng/mL.
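The two abstracts above rest on simple proportion calculations: test characteristics for the CXR decision rule and post-test ("posterior") probabilities for the D-dimer strategies. The following Python sketch is illustrative only, not the authors' analysis; the counts are taken from the reported results, and the Wilson interval is one common choice that may differ from the interval method the authors used.

```python
# Illustrative only: proportions with Wilson 95% CIs, using counts implied by
# the results reported above. Interval method and code are assumptions.
from statsmodels.stats.proportion import proportion_confint

def rate_ci(k, n):
    """Proportion and Wilson 95% confidence interval."""
    return k / n, proportion_confint(k, n, alpha=0.05, method="wilson")

# CXR decision rule vs. reference-standard radiograph (4 positive, 126 negative CXRs)
sens, sens_ci = rate_ci(4, 4)        # rule flagged all 4 positive CXRs
spec, spec_ci = rate_ci(73, 126)     # rule cleared 73 of 126 negative CXRs
lr_pos = sens / (1 - spec)           # positive likelihood ratio, ~2.4
print(f"sensitivity {sens:.2f} {sens_ci}, specificity {spec:.2f} {spec_ci}, LR+ {lr_pos:.1f}")

# Post-test probability of PE among D-dimer-negative, "PE unlikely" patients
post, post_ci = rate_ci(4, 110)      # 4 PE+ among 110 below the standard threshold
print(f"posterior probability of PE {post:.3f}, 95% CI {post_ci}")
```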
Of the 200 (35%) patients who also had plain film CXR, radiologists found an infiltrate in only 58. Use of Wells ≤4 produced results similar to the RGS≤6 for exclusion rate and posterior probability of both PE and pneumonia. Conclusion: Doubling the threshold for a positive D-dimer with a PE unlikely pretest probability can significantly reduce CTPA scanning with a slightly increased risk of missed isolated subsegmental PE, and no increase in rate of missed pneumonia. Background: The limitations of developing world medical infrastructure require that patients are transferred from health clinics only when the patient care needs exceed the level of care at the clinic and the receiving hospital can provide definitive therapy. Objectives: To determine what type of definitive care service was sought when patients were transferred from a general outpatient clinic operating Monday through Friday from 8:00 AM to 3:00 PM in rural Haiti to urban hospitals in Port-au-Prince. Methods: Design - Prospective observational review of all patients for whom transfer to a hospital was requested or for whom a clinic ambulance was requested to an off-site location to assist with patient care. Setting - Weekday, daytime-only clinic in Titanyen, Haiti. Participants/Subjects - Consecutive series of all patients for whom transfer to another health care facility or for whom an ambulance was requested during the time periods 11/22/2010-12/14/2010 and 3/28/2011-5/13/2011. Results: Between 11/22/2010-12/14/2010 and 3/28/2011-5/13/2011, 37 patients were identified who needed to be transferred to a higher level of care. Sixteen patients (43.2%) presented with medical complaints, 12 (32.4%) were trauma patients, 6 (16.2%) were surgical, and 3 (8.1%) were in the obstetric category. Within these categories, 6 patients were pediatric and 4 non-trauma patients required blood transfusion. Conclusion: While trauma services are often the focus in rural developing world medicine, the need for obstetric care and blood transfusion constituted six (16.2%) cases in our sample. These patients raise important public health, planning, and policy questions relating to access to prenatal care and the need to better understand transfusion medicine utilization among rural Haitian patients with non-trauma related transfusion needs. The data set is limited by sample size and single location of collection. Another limitation is that many patients may not present to the clinic for their health care needs at all if they know that the resources to provide definitive care are unavailable. Background: The practice of emergency medicine in Japan has been unique in that emergency physicians are mostly engaged in critical care and trauma with a multi-specialty model. Over the last decade, with progress in medicine, an aging population with complicated problems, and the institution of postgraduate general clinical training, US model emergency medicine with a single-specialty model has been emerging throughout Japan. However, the current status is unknown. Objectives: The objective of this study was to investigate the current status of implementation of the US model emergency medicine at emergency medicine training institutions accredited by the Japanese Association for Acute Medicine (JAAM). Methods: The ER Committee of the JAAM, the most prestigious professional organization in Japanese emergency medicine, conducted the survey by sending questionnaires to 499 accredited emergency medicine training institutions.
Results: Valid responses obtained from 299 facilities were analyzed. US model EM was provided in 211 facilities (71% of 299 facilities), either full time (24 hours a day, seven days a week; 123 facilities) or part time (less than 24 hours a day; 88 facilities). Among these 211 US model facilities, 44% had between 251 and 500 beds. The annual number of ED visits was less than 20,000 in 64%, and 37% had between 2,001 and 4,000 ambulance transfers per year. The number of emergency physicians was less than 5 in 60% of the facilities. Postgraduate general clinical training was offered at US model EDs in 199 facilities, and ninety hospitals adopted US model EM after 2004, when a 2-year period of postgraduate general clinical training became mandatory for all medical graduates. Sixty-four facilities provided a residency program to train US model emergency physicians, and another 9 institutions were planning to establish one. Conclusion: US model EM has emerged and become commonplace in Japan. Advances in medicine, an aging population, and the mandatory postgraduate general clinical training system are considered to be contributing factors. Erkan Gunay, Ersin Aksay, Ozge Duman Atilla, Nilay Zorbalar, Savas Sezik Tepecik Research and Training Hospital, Izmir, Turkey Background: Workplace safety and occupational health problems are growing concerns, especially in developing countries, as a result of industrial automation and technologic improvements. Occupational injuries are preventable, but they can cause morbidity and mortality resulting in work day loss and financial problems. Hand injuries account for one-third of all traumatic injuries, and the hand is the most commonly injured body part in occupational accidents. Objectives: We aimed to evaluate patients with occupational upper extremity injuries for demographic characteristics, injury types, and work day loss. Methods: Trauma patients over 15 years old admitted to our emergency department with an occupational upper extremity injury were prospectively evaluated from 15.04.2010 to 30.04.2011. Patients with one or more of digit, hand, forearm, elbow, humerus, and shoulder injuries were included. Exclusion criteria were multitrauma, patient refusal to participate, and insufficient data. Patients were followed up through the hospital information system and by phone for work day loss and final diagnosis. Results: During the study period there were 570 patients with an occupational upper extremity injury. A total of 521 (91.4%) patients were included. Patients were 92.1% male, 36.5% were between the ages of 25 and 34, and the mean age was 32.9 ± 9.6 years. 43.8% of the patients were from the metal and machinery sector, and primary education was the highest education level for 74.7% of the patients. The most commonly injured parts were the fingers, most often the index finger and thumb. Crush injury was the most common injury type. 96.3% (n = 502) of the patients were discharged after treatment in the emergency department. Tendon injuries, open fractures, and high-degree burns were the reasons for admission to inpatient services. Mean work day loss was 12.8 ± 27.2 days, and this increased for patients with laboratory or radiologic studies, consultant evaluation, or admission. The 15-24 age group had a significantly lower mean work day loss. Conclusion: Evaluating occupational injury characteristics and risks is essential for identifying preventive measures and actions.
With the guidance of this study preventive actions focusing on high-risk sectors and patients may be the key factor for avoiding occupational injuries and creating safer workplace environments in order to reduce financial and public health problems. Background: As emergency medicine (EM) gains increased recognition and interest in the international arena, a growing number of training programs for emergency health care workers have been implemented in the developing world through international partnerships. Objectives: To evaluate the quality and appropriateness of an internationally implemented emergency physician training program in India. Methods: Physicians participating in an internationally implemented EM training program in India were recruited to participate in a program evaluation. A mixed methods design was used including an online anonymous survey and semi-structured focus groups. The survey assessed the research, clinical, and didactic training provided by the program. Demographics and information on past and future career paths were also collected. The focus group discussions centered around program successes and challenges. Results: Fifty of 59 eligible trainees (85%) participated in the survey. Of the respondents, the vast majority were Indian; 16% were female, and all were between the ages of 25 and 45 years (mean age 31 years). All but two trainees (96%) intend to practice EM as a career. One-third listed a high-income country first for preferred practice location and half listed India first. Respondents directly endorsed the program structure and content, and they demonstrated gains in self-rated knowledge and clinical confidence over their years of training. Active challenges identified include: (1) insufficient quantity and inconsistent quality of Indian faculty, (2) administrative barriers to academic priorities, and (3) persistent threat of brain drain if local opportunities are inadequate. Conclusion: Implementing an international emergency physician training program with limited existing local capacity is a challenging endeavor. Overall, this evaluation supports the appropriateness and quality of this partnership model for EM training. One critical challenge is achieving a robust local faculty. Early negotiations are recommended to set educational priorities, which includes assuring access to EM journals. Attrition of graduated trainees to high-income countries due to better compensation or limited in-country opportunities continues to be a threat to long-term local capacity building. Background: With an increasing frequency and intensity of manmade and natural disasters, and a corresponding surge in interest in international emergency medicine (IEM) and global health (GH), the number of IEM and GH fellowships is constantly growing. There are currently 34 IEM and GH fellowships, each with a different curriculum. Several articles have proposed the establishment of core curriculum elements for fellowship training. To the best of our knowledge, no study has examined whether IEM and GH fellows are actually fulfilling these criteria. Objectives: This study sought to examine whether current IEM and GH fellowships are consistently meeting these core curricula. Methods: An electronic survey was administered to current IEM and GH fellowship directors, current fellows, and recent graduates of a total of 34 programs. 
Survey respondents stated their amount of exposure to previously published core curriculum components: EM System Development, Humanitarian Assistance, Disaster Response, and Public Health. A pooled analysis comparing overall responses of fellows to those of program directors was performed using two-sampled t-test. Results: Response rates were 88% (n = 30) for program directors and 53% (n = 17) for current and recent fellows. Programs varied significantly in terms of their emphasis on and exposure to six proposed core curriculum areas: EM System Development, EM Education Development, Humanitarian Aid, Public Health, EMS, and Disaster Management. Only 43% of programs reported having exposure to all four core areas. As many as 67% of fellows reported knowing their curriculum only somewhat or not at all prior to starting the program. Conclusion: Many fellows enter IEM and GH fellowships without a clear sense of what they will get from their training. As each fellowship program has different areas of curriculum emphasis, we propose not to enforce any single core curriculum. Rather, we suggest the development of a mechanism to allow each fellowship program to present its curriculum in a more transparent manner. This will allow prospective applicants to have a better understanding of the various programs' curricula and areas of emphasis. Background: Advance warning of probable intensive care unit (ICU) admissions could allow the bed placement process to start earlier, decreasing ED length of stay and relieving overcrowding conditions. However, physicians and nurses poorly predict a patient's ultimate disposition from the emergency department at triage. A computerized algorithm can use commonly collected data at triage to accurately identify those who likely will need ICU admission. Objectives: To evaluate an automated computer algorithm at triage to predict ICU admission and 28-day in-hospital mortality. Methods: Retrospective cohort study at a 55,000 visit/ year Level I trauma center/tertiary academic teaching hospital. All patients presenting to the ED between 12/16/2008 and 10/1/2010 were included in the study. The primary outcome measure was ICU admission from the emergency department. The secondary outcome measure was 28-day all-cause in-hospital mortality. Patients discharged or transferred before 28 days were considered to be alive at 28 days. Triage data includes age, sex, acuity (emergency severity index), blood pressure, heart rate, pain scale, respiratory rate, oxygen saturation, temperature, and a nurse's free text assessment. A Latent Dirichlet Allocation algorithm was used to cluster words in triage nurses' free text assessments into 500 topics. The triage assessment for each patient is then represented as a probability distribution over these 500 topics. Logistic regression was then used to determine the prediction function. Results: A total of 94,973 patients were included in the study. 3.8% were admitted to the ICU and 1.3% died within 28 days. These patients were then randomly allocated to train (n = 75,992; 80%) and test (n = 18,981; 20%) data sets. The area under the receiver operating characteristic curve (AUC) when predicting ICU Background: At the 2011 SAEM Annual Meeting, we presented the derivation of two hospital admission prediction models adding coded chief complaint (CCC) data from a published algorithm (Thompson et al. Acad Emerg Med 2006; 13:774-782) to demographic, ED operational, and acuity (Emergency Severity Index (ESI)) data. 
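The triage ICU-prediction algorithm described above (Latent Dirichlet Allocation over nurses' free-text assessments, with each visit represented as a topic distribution and fed to logistic regression alongside structured triage data) can be sketched in a few lines of Python. This is an illustration of the general pattern, not the authors' implementation; the toy notes, outcome labels, structured features, and 3-topic model below are invented stand-ins for the study's 500-topic model and real covariates.

```python
# Illustrative sketch of the triage free-text approach described above.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

notes = ["chest pain diaphoretic hypotensive",
         "ankle injury after fall ambulatory",
         "altered mental status unresponsive",
         "sore throat and fever two days"]
icu_admit = np.array([1, 0, 1, 0])                       # toy outcome labels

counts = CountVectorizer().fit_transform(notes)           # word counts per note
lda = LatentDirichletAllocation(n_components=3, random_state=0)  # study used 500 topics
topic_probs = lda.fit_transform(counts)                   # each row sums to 1

structured = np.array([[67, 88], [24, 99], [81, 90], [19, 98]])  # e.g., age, SpO2
X = np.hstack([topic_probs, structured])                  # topics + structured triage data

model = LogisticRegression(max_iter=1000).fit(X, icu_admit)
print("toy training AUC:", roc_auc_score(icu_admit, model.predict_proba(X)[:, 1]))
```

In practice, as in the abstract, the model would be fit on a training split and its discrimination reported as the AUC on a held-out test split.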
Objectives: We hypothesized that these models would be validated when applied to a separate retrospective cohort, justifying prospective evaluation. Methods: We conducted a retrospective, observational validation cohort study of all adult ED visits to a single tertiary care center (census: 49,000/yr) (4/1/09-12/31/10). We downloaded from the center's clinical tracking system demographic (age, sex, race), ED operational (time and day of arrival), ESI, and chief complaint data on each visit. We applied the derived CCC hospital admission prediction models (all identified CCC categories and CCC categories with significant odds of admission from multivariable logistic regression in the derivation cohort) to the validation cohort to predict odds of admission and compared to prediction models that consisted of demographic, ED operational, and ESI data, adding each category to subsequent models in a stepwise manner. Model performance is reported by areaunder-the-curve (AUC) data and 95%CI. signs, pain level, triage level, 72-hour return, number of past visits in the previous year, injury, and one of 122 chief complaint codes (representing 90% of all visits in the database). Outputs for training included ordering of a complete blood count, basic chemistry (electrolytes, blood urea nitrogen, creatinine), cardiac enzymes, liver function panel, urinalysis, electrocardiogram, x-ray, computed tomography, or ultrasound. Once trained, it was used on the NHAMCS-ED 2008 database, and predictions were generated. Predictions were compared with documented physician orders. Outcomes included the percent of total patients who were correctly pre-ordered, sensitivity (the percent of patients who had an order that were correctly predicted), and the percent over-ordered. Waiting time for correctly pre-ordered patients was highlighted, to represent a potential reduction in length of stay achieved by preordering. LOS for patients overordered was highlighted to see if over-ordering may cause an increase in LOS for those patients. Unit cost of the test was also highlighted, as taken from the 2011 Medicare fee schedule. physician times. However, during peak ED census times, many patients with completed tests and treatment initiated by triage await discharge by the next assigned physician. Objectives: Determine if a physician-led discharge disposition (DD) team can reduce the ED length of stay (LOS) for patients of similar acuity who are ultimately discharged compared to standard physician team assignment. Methods: This prospective observational study was performed from 10/2010 to 10/2011 at an urban tertiary referral academic hospital with an annual ED volume of 87,000 visits. Only Emergency Severity Index Level 3 patients were evaluated. The DD team was scheduled weekdays from 14:00 until 23:00. Several ED beds were allocated to this team. The team was comprised of one attending physician and either one nurse and a tech or two nurses. Comparisons were made between LOS for discharged patients originally triaged to the main ED side who were seen by the DD team versus the main side teams. Time from triage physician to team physician, team physician to discharge decision time, and patient age were compared by unpaired t-test. Differences were studied for number of patients receiving x-rays, CT scan, labs, and medications. Results: DD team mean LOS in hours for discharged patients was shorter at 3.4 (95% CI: 3.3-3.6, n = 1451) compared to 6.4 (95% CI: 6.3-6.5, n = 4601) on the main side, p < 0.01. 
The mean time from triage physician to DD team physician was 1.4 hours (95% CI: 1.4-1.5, n = 1447) versus 2.7 hours (95% CI: 2.7-2.8, n = 4568) to main side physician, p < 0.01. The DD team physician mean time to discharge decision was 1.0 hour (95% CI: 1.0-1.1, n = 1432) compared to 2.5 hours (95% CI: 2.4-2.6, n = 4590) for main side physicians, p < 0.01. The DD team patients' mean age was 42.6 years (95% CI: 41.9-43.6, n = 1454) compared to main side patients' mean age of 49.1 years (95% CI: 48.5-49.6, n = 4621). The DD team patients (n = 1454) received fewer x-rays (40% vs. 59%), CT scans (13% vs. 23%), labs (64% vs. 85%), and medications (63% vs. 68%) than main side patients (n = 4621), p < 0.01 for all comparisons. Conclusion: The DD team complements the advanced triage process to further reduce LOS for patients who do not require extended ED treatment or observation. The DD team was able to work more efficiently because its patients tended to be younger and had fewer lab and imaging tests ordered by the triage physician compared to patients who were later seen on the ED main side. Objectives: To evaluate the association between ED boarding time and the risk of developing a hospital-acquired pressure ulcer (HAPU). Methods: We conducted a retrospective cohort study using administrative data from an academic medical center with an adult ED with 55,000 annual patient visits. All patients admitted into the hospital through the ED from 6/30/2008 to 2/28/2011 were included. Development of HAPU was determined using the standardized, national protocol for CMS reporting of HAPU. ED boarding time was defined as the time between an order for inpatient admission and transport of the patient out of the ED to an inpatient unit. We used a multivariate logistic regression model with development of a HAPU as the outcome variable, ED boarding time as the exposure variable, and the following variables as covariates: age, sex, initial Braden score, and admission to an intensive care unit (ICU) from the ED. The Braden score is a scale used to determine a patient's risk for developing a HAPU based on known risk factors. A Braden score is calculated for each hospitalized patient at the time of admission. We included Braden score as a covariate in our model to determine if ED boarding time was a predictor of HAPU independent of Braden score. Results: Of 46,704 patients admitted to the hospital through the ED during the study period, 243 developed a HAPU during their hospitalization. Clinical characteristics are presented in the table. Per hour of ED boarding time, the adjusted OR of developing a HAPU was 1.02 (95% CI 1.01-1.04, p = 0.007). A median of 40 patients per day were admitted through the ED, accumulating 144 hours of ED boarding time per day, with each hour of boarding time increasing the risk of developing a HAPU by 2%. Conclusion: In this single-center, retrospective study, longer ED boarding time was associated with increased risk of developing a HAPU. This study queried ED and inpatient nurses and compared their opinions toward inpatient boarding. It also assessed their preferred boarding location if they were patients. Objectives: This study queried ED and inpatient nurses and compared their opinions toward inpatient boarding. Methods: A survey was administered to a convenience sample of ED and ward nurses. It was performed in a 631-bed academic medical center (30,000 admissions/yr) with a 68-bed ED (60,000 visits/yr). Nurses were identified as ED or ward and whether they had previously worked in the ED.
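As a hedged illustration of the boarding-time model described above (logistic regression with HAPU as the outcome, ED boarding time as the exposure, and age, sex, initial Braden score, and ICU admission as covariates), the sketch below uses simulated data. The column names and data frame are assumptions for illustration; with real data, the exponentiated boarding-time coefficient would give the adjusted OR per hour of boarding.

```python
# Illustrative sketch with simulated data, not the study data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hapu": rng.binomial(1, 0.01, n),            # outcome: pressure ulcer developed
    "boarding_hours": rng.exponential(3.0, n),   # exposure: ED boarding time
    "age": rng.integers(18, 95, n),
    "male": rng.binomial(1, 0.5, n),
    "braden": rng.integers(9, 23, n),            # initial Braden score
    "icu": rng.binomial(1, 0.2, n),              # admitted to ICU from the ED
})

fit = smf.logit("hapu ~ boarding_hours + age + male + braden + icu", data=df).fit(disp=0)
or_per_hour = np.exp(fit.params["boarding_hours"])
ci_low, ci_high = np.exp(fit.conf_int().loc["boarding_hours"])
print(f"adjusted OR per boarding hour: {or_per_hour:.2f} ({ci_low:.2f}-{ci_high:.2f})")
```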
The nurses were asked if there were any circumstances where admitted patients should be boarded in the ED or inpatient hallways. They were also asked their preferred location if they were admitted as a patient. Six clinical scenarios were then presented and their opinions on boarding queried. Results: Ninety nurses completed the survey; 35 (39%) were current ED nurses (cED), 40 (44%) had previously worked in the ED (pED). For the entire group 46 (52%) believed admitted patients should board in the ED. Overall, 52 (58%) were opposed to inpatient boarding, with 20% of cED versus 83% of current ward (cW) nurses (P < 0.0001) and 28% of pED versus 85% of nurses never having worked in the ED (nED) opposed (P < 0.001). If admitted as patients themselves, overall 43 (54%) preferred inpatient boarding, with 82% of cED versus 33% of cW nurses (P < 0.0001) and 74% of pED versus 34% nED nurses (P = 0.0007) preferring inpatient boarding. For the six clinical scenarios, significant differences in opinion regarding inpatient boarding existed in all but two cases: a patient with stable COPD but requiring oxygen and an intubated, unstable sepsis patient. Conclusion: Ward nurses and those who have never worked in the ED are more opposed to inpatient boarding than ED nurses and nurses who have worked previously in the ED. Nurses admitted as patients seemed to prefer not being boarded where they work. ED and ward nurses seemed to agree that unstable or potentially unstable patients should remain in the ED. 8 weeks. Staff satisfaction was evaluated through pre/ post-shift and study surveys; administrative data (physician initial assessment (PIA), length of stay (LOS), patients leaving without being seen (LWBS) and against medical advice [LAMA] ) were collected from an electronic, real-time ED information system. Data are presented as proportions and medians with interquartile ranges (IQR); bivariable analyses were performed. Results: ED physicians and nurses expected the intervention to reduce the LOS of discharged patients only. PIA decreased during the intervention period (68 vs 74 minutes; p < 0.001). No statistically/clinically significant differences were observed in the LOS; however, there was a significant reduction in the LWBS (4.7% to 3.5% p = 0.003) and LAMA (0.7% to 0.4% p = 0.028) rates. While there was a reduction of approximately 5 patients seen per physician in the affected ED area, the total number of patients seen on that unit increased by approximately 10 patients/day. Overall, compared to days when there was no extra shift, 61% of emergency physicians stated their workload decreased and 73% felt their stress level at work decreased. Conclusion: While this study didn't demonstrate a reduction in the overall LOS, it did reduce PIA times and the proportion of LWBS/LAMA patients. While physicians saw fewer patients during the intervention study period, the overall patient volume increased and satisfaction among ED physicians was rated higher. Provider-and Hospital-Level Variation In Admission Rates And 72-Hour Return Admission Rates Jameel Abualenain 1 , William Frohna 2 , Robert Shesser 1 , Ru Ding 1 , Mark Smith 2 , Jesse M. 
Pines 1 1 The George Washington University, Washington, DC; 2 Washington Hospital Center, Washington, DC Background: Decisions for inpatient versus outpatient management of ED patients are the most important and costliest decision made by emergency physicians, but there is little published on the variation in the decision to admit among providers or whether there is a relationship between a provider's admission rate and the proportion of their patients who return within 72 hours of the initial visit and are subsequently admitted (72H-RA). Objectives: We explored the variation in provider-level admission rates and 72H-RA rates, and the relationship between the two. Methods: A retrospective study using data from three EDs with the same information system over varying time periods: Washington Hospital Center (WHC) (2008-10), Franklin Square Hospital Center (FSHC) , and Union Memorial Hospital (UMH) . Patients were excluded if left without being seen, left against medical advice, fast-track, psychiatric patients, and aged <18 years. Physicians with <500 ED encounters or an admission rate <15% were excluded. Logistic regression was used to assess the relationship between physician-level 72H-RA and admission rates, adjusting for patient age, sex, race, and hospital. Results: 389,120 ED encounters were treated by 90 physicians. Mean patient age was 50 years SD 20, 42% male, and 61% black. Admission rates differed between hospitals (WHC = 40%, UMH = 37%, and FSHC = 28%), as did the 72H-RA (WHC = 0.9%, UMH = 0.6%, and FSHC = 0.6%). Across all hospitals, there was great variation in individual physician admission rates (15.4%-50.0%). The 72H-RA rates were quite low, but demonstrated a similar magnitude of individual variation (0.3%-1.2%). Physicians with the highest admission rate quintile had lower odds of 72H-RA (OR 0.8 95% CI 0.7-0.9) compared to the lowest admission rate quintile, after adjusting for other factors. No intermediate admission rate quintiles (2nd, 3rd, or 4th) were significantly different from the lowest admission rate quintile with regard to 72H-RA. Conclusion: There is more than three-fold variation in individual physician admission rates indicating great variation among physicians in hospital admission rates and 72H-RA. The highest admitters have the lowest 72H-RA; however, evaluating the causes and consequences of such significant variation needs further exploration, particularly in the context of health reform efforts aimed at reducing costs. Background: ED scribes have become an effective means to assist emergency physicians (EPs) with clinical documentation and improve physician productivity. Scribes have been most often utilized in busy community EDs and their utility and functional integration into an academic medical center with resident physicians is unknown. Objectives: To evaluate resident perceptions of attending physician teaching and interaction after introduction of scribes at an EM residency training program, measured through an online survey. Residents in this study were not working with the scribes directly, but were interacting indirectly through attending physician use of scribes during ED shifts. Methods: An online ten question survey was administered to 31 residents of a Midwest academic emergency medicine residency program (PGY1-PGY3 program, 12 annual residents), 8 months after the introduction of scribes into the ED. Scribes were introduced as EMR documentation support (Epic 2010, Epic Systems Inc.) 
for attending EPs while evaluating primary patients and supervising resident physicians. Questions investigated EM resident demographics and perceptions of scribes (attending physician interaction and teaching, effect on resident learning, willingness to use scribes in the future), using Likert scale responses (1 minimal, 9 maximum) and a graduated percentage scale used to quantify relative values, where applicable. Data were analyzed using Kruskal-Wallis and Mann-Whitney U tests. Results: Twenty-one of 31 EM residents (68%) completed the survey (81% male; 33% PGY1, 29% PGY2, 38% PGY3). Four residents had prior experience with scribes. Scribes were felt to have no effect on attending EPs' direct resident interaction time (mean score 4.5, SD 1.2), time spent bedside teaching (4.8, SD 0.9), or quality of teaching (4.9, SD 0.8), as well as no effect on residents' overall learning process (4.6, SD 1.1). However, residents felt positive about utilizing scribes at their future practice sites (6.0, SD 2.7). No response differences were noted for prior experience, training level, or sex. Conclusion: When scribes are introduced at an EM residency training site, residents of all training levels perceive it as a neutral interaction, when measured in terms of perceived time with attending EPs and quality of the teaching when scribes are present. The Effect of Introduction of an Electronic Medical Record on Resident Productivity in an Academic Emergency Department Shawn London, Christopher Sala University of Connecticut School of Medicine, Farmington, CT Background: There are few available data describing the effect of implementation of an electronic medical record (EMR) on provider productivity in the emergency department, and, to our knowledge, no studies that address this issue for housestaff in particular. Objectives: We sought to quantify the changes in provider productivity pre- and post-EMR implementation to test our hypothesis that resident clinical productivity, based on patients seen per hour, would be negatively affected by EMR implementation. Methods: The academic emergency department at Hartford Hospital, the principal clinical site of the University of Connecticut Emergency Medicine Residency, sees over 95,000 patients on an annual basis. This environment is unique in that, pre-EMR, patient tracking and orders had been performed electronically using the Sunrise system (Eclipsys Corp) for over 8 years prior to conversion to the Allscripts ED EMR for all aspects of ED care in October 2010. The investigators completed a random sample of day/evening/night/weekend shift productivity to obtain monthly aggregate productivity data (patients seen per hour) by year of training. Results: There was an initial 4.2% decrease in productivity for PGY-3 residents, from an average of 1.44 patients seen per hour in the three blocks preceding activation of the EMR to 1.38 patients seen per hour in the three subsequent blocks. PGY-3 performance returned to baseline, 1.48 patients per hour, over the following three months. There was no change noted in patients seen per hour for PGY-1 and PGY-2 residents. Conclusion: While many physicians tend to assume that EMRs pose a significant barrier to productivity in the ED, in our academic emergency department there was no lasting change in resident productivity based on the patients seen per hour metric. The minor decrease which did occur in PGY-3 residents was transient and was not apparent 3 months after the EMR was implemented.
Our experience suggests that decrease in the rate of patients seen per hour in the resident population should not be considered justification to delay or avoid implementation of an EMR in the emergency department. Emory University, Atlanta, GA; 2 Children's Healthcare of Atlanta, Atlanta, GA Background: Variation in physician practice is widely prevalent and highlights an opportunity for quality improvement and cost containment. Monitoring resources used in the management of common pediatric emergency department (ED) conditions has been suggested as an ED quality metric. Objectives: To determine if providing ED physicians with severity-adjusted data on resource use and outcomes, relative to their peers, can influence practice patterns. Methods: Data on resource use by physicians were extracted from electronic medical records at a tertiary pediatric ED for four common conditions in mid-acuity (Emergency Severity Index level 3): fever, head injury, respiratory illness, and gastroenteritis. Condition-relevant resource use was tracked for lab tests (blood count, chemistry, CRP), imaging (chest x-ray, abdominal x-ray, head CT scan, abdominal CT scan), intravenous fluids, parenteral antibiotics, and intravenous ondansetron. Outcome measures included admission to hospital and ED length of stay (LOS); 72-hr return to ED (RR) was used as a balancing measure. Scorecards were constructed using box plots to show physicians their practice patterns relative to peers (the figure shows an example of the scorecard for gatroenteritis for one physician, showing resources use rates for IV fluids and labs). Blinded scorecards were distributed quarterly for five quarters using rolling-year averages. A pre/post-intervention analysis was performed with Sep 1, 2010 as the intervention date. Fisher's exact and Wilcoxon rank sum tests were used for analysis. Results: We analyzed 45,872 patient visits across two hospitals (24,834 pre-and 21,038 post-intervention), comprising 17.6% of the total ED volume during the study period. Patients were seen by 100 physicians (mean 462 patients/physician). The table shows overall physician practice in the pre-and post-intervention periods. Significant reduction in resource use was seen for abdominal/pelvic CT scans, head CT scan, chest x-rays, IV ondansetron, and admission to hospital. ED LOS decreased from 129 min to 126 min (p = 0.0003). There was no significant change in 72-hr return rate during the study period (2.2% pre-, 2.0% post-intervention). Conclusion: Feedback on comprehensive practice patterns including resource use and quality metrics can influence physician practice on commonly used resources in the ED. billboards, via iPhone application, Twitter, and text messaging. There is a paucity of data describing the accuracy of publically posted ED wait times. Objectives: To examine the accuracy of publicly posted wait times of four emergency departments within one hospital system. Methods: A prospective analysis of four ED-posted wait times in comparison to the wait times for actual patients. The main hospital system calculated and posted ED wait times every twenty minutes for all four system EDs. A consecutive sample of all patients who arrived 24/7 over a 4-week period during July and August 2011 was included. An electronic tracking system identified patient arrival date and the actual incurred wait time. Data consisted of the arrival time, actual wait time, hospital census, budgeted hospital census, and the posted ED wait time. 
For each ED the difference was calculated between the publicly posted ED wait time at the time of patient's arrival and the patient's actual ED wait time. The average wait times and average wait time error between the ED sites were compared using a two-tailed Student's t-test. The correlation coefficient between the differences in predicted/ actual wait times was also calculated for each ED. Results: There were 8890 wait times within the four EDs included in the analysis. The average wait time (in minutes) at each facility was: 64.0 (±62.4) for the main ED, 22.0 (±22.1) for freestanding ED (FED) #1, 25.0 (±25.6) for FED #2, and 10.0 (±12.6) for the small community ED. The average wait time error (in minutes) for each facility was 31(±61.2) for the main ED, 13 (±23.65) for FED #1, 17 (±26.65) for FED #2, and 1 (±11.9) for the community hospital ED. The results from each ED were statistically significant for both average wait time and average wait time error (p < 0.0001). There was a positive correlation between the average wait time and average wait time error, with R-values of 0.84, 0.83, 0.58, and 0.48 for the main ED, FED #1, FED #2, and the small community hospital ED, respectively. Each correlation was statistically significant; however, no correlation was found between the number of beds available (budgeted-actual census) and average wait times. Conclusion: Publically posted ED wait times are accurate for facilities with less than 2000 ED visits per month. They are not accurate for EDs with greater than 4000 visits per month. Reduction of Pre-analytic Laboratory Errors in the Emergency Department Using an Incentive-Based System Benjamin Katz, Daniel Pauze, Karen Moldveen Albany Medical Center, Albany, NY Background: Over the last decade, there has been an increased effort to reduce medical errors of all kinds. Laboratory errors have a significant effect on patient care, yet they are usually avoidable. Several studies suggest that up to 90% of laboratory errors occur during the pre-or post-analytic phase. In other words, errors occur during specimen collection and transport or reporting of results, rather than during laboratory analysis itself. Objectives: In an effort to reduce pre-analytic laboratory errors, the ED instituted an incentive-based program for the clerical staff to recognize and prevent specimen labeling errors from reaching the patient. This study sought to demonstrate the benefit of this incentive-based program. Methods: This study examined a prospective cohort of ED patients over a three year period in a tertiary care academic ED with annual census of 72,000. As part of a continuing quality improvement process, laboratory specimen labeling errors are screened by clerical staff by reconciling laboratory specimen label with laboratory requisition labels. The number of ''near-misses'' or mismatched specimens captured by each clerk was then blinded to all patient identifiers and was collated by monthly intervals. Due to poor performance in 2009, an incentive program was introduced in early 2010 by which the clerk who captured the most mismatched specimens would be awarded a $50 gift card on a quarterly basis. The total number of missed laboratory errors was then recorded on a monthly basis. Investigational data were analyzed using bivariate statistics. Background: Most studies on operational research have been focused in academic medical centers, which typically have larger volumes of patients and are located in urban metropolitan areas. 
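A minimal sketch of the posted wait-time accuracy analysis described above, assuming visit-level arrays of posted and actual waits: the wait-time error is the difference between the actual wait incurred and the wait posted at arrival, summarized as a mean with SD and correlated with the actual wait. The numbers below are invented for illustration, not study data.

```python
# Illustrative only: visit-level posted vs. actual ED wait times.
import numpy as np
from scipy import stats

posted = np.array([30, 45, 20, 60, 15, 40], dtype=float)   # minutes, posted at arrival
actual = np.array([55, 70, 25, 95, 10, 80], dtype=float)   # minutes, actually incurred

error = actual - posted                                     # wait-time error per visit
print(f"mean actual wait {actual.mean():.1f} min")
print(f"mean wait-time error {error.mean():.1f} min (SD {error.std(ddof=1):.1f})")

r, p = stats.pearsonr(actual, error)                        # does error grow with the wait?
print(f"correlation of error with actual wait: r = {r:.2f} (p = {p:.3f})")
```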
As CMS core measures in 2013 begin to compare emergency departments (EDs) on treatment time intervals, especially length of stay (LOS), it is important to explore whether any differences exist that are inherent to patient volume. Objectives: The objective of this study was to examine differences in operational metrics based on annual patient census. The hypothesis was that treatment time intervals and operational metrics differ among these volume categories. Methods: The ED Benchmarking Alliance has collected yearly operational metrics since 2004. As of 2010, there were 499 EDs providing data across the United States. EDs are stratified by annual volume for comparison in the following categories: <20K, 20-40K, 40-60K, and over 80K. In this study, metrics for EDs with <20K visits per year were compared to those of different volumes, averaged from 2004-2010. Mean values were compared to <20K visits as a reference point for statistical difference using t-tests to compare means, with a p-value < 0.05 considered significant. Results: As seen in the table, a greater percentage of high-acuity patients was seen in higher volume EDs than in <20K EDs. The percentage of patients transferred to another hospital was higher in <20K EDs. A higher percentage arrived by EMS and a higher percentage were admitted in higher volume EDs when compared to <20K EDs. In addition, the median LOS for both discharged and admitted patients and the percentage who left before treatment was complete (LBTC) were higher in the higher volume EDs. Conclusion: Lower volume EDs have lower acuity when compared to higher volume EDs. Lower volume EDs have shorter median LOS and lower left-before-treatment-complete percentages. As CMS core measures require hospitals to report these metrics, it will be important to compare them based on volume and not in aggregate. Does the Addition of a Hands-Free Communication Device Improve ED Interruption Times? Amy Ernst, Steven J. Weiss, Jeffrey A. Reitsema University of New Mexico, Albuquerque, NM Background: ED interruptions occur frequently. Recently a hands-free communication device (Vocera) was added to a cell phone and a pager in our ED. Objectives: The purpose of the present study was to determine whether this addition improved interruption times. Our hypothesis was that the device would significantly decrease the length of interruptions. Methods: This study was a prospective cohort study of attending ED physician calls and interruptions in a Level I trauma center with an EM residency. Interruptions included phone calls, EKG interpretations, pages to resuscitation, and other miscellaneous interruptions (including nursing issues, laboratory, EMS, and radiology). We studied a convenience sample intended to include mostly evening shifts, the busiest ED times. The length of each interruption was recorded. Data were collected for a comparison group pre-Vocera. Three investigators collected data, including seven different attendings' interruptions. Data were collected on a form, then entered into an Excel file. Data collectors' agreement was determined during two additional four-hour shifts to calculate a kappa statistic. SPSS was used for data entry and statistical analysis. Descriptive statistics were used for univariate data. Chi-square and Mann-Whitney U nonparametric tests were used for comparisons. Results: Of the total 511 interruptions, 33% were phone calls, 24% were EKGs to be read, 18% were pages to resuscitation, and 25% miscellaneous.
There were no significant differences in types of interruptions pre- vs. post-Vocera. Pre-Vocera we collected 40 hours of data with 65 interruptions, a mean of 1.6 per hour. Post-Vocera, 180 hours of data were collected with 446 interruptions, a mean of 2.5 per hour. There was a significant difference in length of interruptions, with an average of 9 minutes pre-Vocera vs. 4 minutes post-Vocera (p = 0.012, diff 4.9, 95% CI 1.8-8.1). Vocera calls were significantly shorter than non-Vocera calls (1 vs 6 minutes, p < 0.001). Comparing data collectors for type of interruption during the same 4-hour shift resulted in a kappa (agreement) of 0.73. Conclusion: The addition of a hands-free communication device may improve interruptions by shortening call length. Background: Analyses of patient flow through the ED typically focus on metrics such as wait time, total length of stay (LOS), or boarding time. However, little is known about how much interaction a patient has with clinicians after being placed in a room, or what proportion of the in-room visit is also spent ''waiting,'' rather than directly interacting with care providers. Objectives: The objective was to assess the proportion of time, relative to the time in a patient care area, that a patient spends actively interacting with providers during an ED visit. Methods: A secondary analysis of 29 audiotaped encounters of patients with one of four diagnoses (ankle sprain, back pain, head injury, laceration) was performed. The setting was an urban, academic ED. ED visits of adult patients were recorded from the time of room placement to discharge. Audiotapes were edited to remove all downtime and non-patient-provider conversations. LOS and door-to-doctor times were abstracted from the medical record. The proportion of time the patient spent in direct conversation with providers (''talk-time'') was calculated as the ratio of the edited audio recording time to the time spent in a patient care area (talk-time = edited audio time/(LOS - door-to-doctor)). Multiple linear regression controlling for time spent in the patient care area, age, and sex was performed. Results: The sample was 31% male with a mean age of 37 years. Median LOS: 133 minutes (IQR: 88-169), median door-to-doctor: 42 minutes (IQR: 29-67), median time spent in patient care area: 65 minutes (IQR: 53-106). Median time spent in direct conversation with providers was 16 minutes (IQR: 12-18), corresponding to a talk-time percentage of 19.2% (IQR: 14.7-24.6%). There were no significant differences based on diagnosis. Regression analysis showed that those spending a longer time in a patient care area had a lower percentage of talk time (β = -0.11, p = 0.002). Conclusion: Although limited by sample size, these results indicate that approximately 80% of a patient's time in a care area is spent not interacting with providers. While some of the time spent waiting is out of the providers' control (e.g. awaiting imaging studies), this significant ''downtime'' represents an opportunity for both process improvement efforts to decrease downtime as well as the development of innovative patient education efforts to make the best use of the remaining downtime. Degradation of Emergency Department Operational Data Quality During Electronic Health Record Implementation Michael J. Ward, Craig Froehle, Christopher J.
Lindsell University of Cincinnati, Cincinnati, OH Background: Process improvement initiatives targeted at operational efficiency frequently use electronic timestamps to estimate task and process durations. Errors in timestamps hamper the use of electronic data to improve a system and may result in inappropriate conclusions about performance. Despite the fact that the number of electronic health record (EHR) implementations is expected to increase in the U.S., the magnitude of this EHR-induced error is not well established. Objectives: To estimate the change in the magnitude of error in ED electronic timestamps before and after a hospital-wide EHR implementation. Methods: Time-and-motion observations were conducted in a suburban ED, annual census 35,000, after receiving IRB approval. Observation was conducted 4 weeks pre- and 4 weeks post-EHR implementation. Patients were identified on entering the ED and tracked until exiting. Times were recorded to the nearest second using a calibrated stopwatch, and are reported in minutes. Electronic data were extracted from the patient-tracking system in use pre-implementation, and from the EHR post-implementation. For comparison of means, independent t-tests were used. Chi-square and Fisher's exact tests were used for proportions, as appropriate. Results: There were 263 observations; 126 before and 137 after implementation. The differences between observed times and timestamps were computed and found to be normally distributed. Post-implementation, mean physician seen times, along with arrival-to-bed, bed-to-physician, and physician-to-disposition intervals, occurred before the observed times. Physician seen timestamps were frequently incorrect and did not improve post-implementation. Significant discrepancies (ten minutes or greater) from observed values were identified in timestamps involving disposition decision and exit from the ED. Calculating service time intervals resulted in every service interval (except arrival to bed) having at least 15% of the times with significant discrepancies. It is notable that missing values were more frequent post-EHR implementation. Conclusion: EHR implementation results in reduced variability of timestamps but reduced accuracy and an increase in missing timestamps. Those using electronic timestamps for operational efficiency assessment should recognize the magnitude of error, and the compounding of error, when computing service times. Background: Procedural sedation and analgesia is used in the ED in order to efficiently and humanely perform necessary painful procedures. The opposing physiological effects of ketamine and propofol suggest the potential for synergy, and this has led to interest in their combined use, commonly termed ''ketofol'', to facilitate ED procedural sedation. Objectives: To determine if a 1:1 mixture of ketamine and propofol (ketofol) for ED procedural sedation results in a 13% or more absolute reduction in adverse respiratory events compared to propofol alone. Methods: Participants were randomized to receive either ketofol or propofol in a double-blind fashion according to a weight-based dosing protocol. Inclusion criteria were age 14 years or greater, and ASA Class 1-3 status. The primary outcome was the number and proportion of patients experiencing an adverse respiratory event according to pre-defined criteria (the ''Quebec Criteria''). Secondary outcomes were sedation consistency, sedation efficacy, induction time, sedation time, procedure time, and adverse events.
Results: A total of 284 patients were enrolled, 142 per group. Forty-three (30%) patients experienced an adverse respiratory event in the ketofol group compared to 46 (32%) in the propofol group (difference 2%; 95% CI -9% to 13%; p = 0.798). Thirty-eight (27%) patients receiving ketofol and 36 (25%) receiving propofol developed hypoxia, of whom three (2%) ketofol patients and 1 (1%) propofol patient received bag-valve-mask ventilation. Sixty-five (46%) patients receiving ketofol and 93 (65%) receiving propofol required repeat medication dosing or lightened to a Ramsay Sedation Score of 4 or less during their procedure (difference 19%; 95% CI 8% to 31%; p = 0.001). Procedural agitation occurred in 5 patients (3.5%) receiving ketofol compared to 15 (11%) receiving propofol (difference 7.5%, 95% CI 1% to 14%). Recovery agitation requiring treatment occurred in six patients (4%, 95% CI 2.0% to 8.9%) receiving ketofol. Other secondary outcomes were similar between the groups. Patients and staff were highly satisfied with both agents. Conclusion: Ketofol for ED procedural sedation does not result in a reduced incidence of adverse respiratory events compared to propofol alone. Induction time, efficacy, and sedation time were similar; however, sedation depth appeared to be more consistent with ketofol. Background: ED procedural sedation (EDPS) is commonly performed with propofol and its safety is well established. However, in 2010 CMS enacted guidelines defining propofol as deep sedation and requiring administration by a physician. Common EDPS practice had been one physician performing both the sedation and procedure. EDPS has proven safe under this one-physician practice. However, the 2010 guidelines mandated that separate physicians perform each. Objectives: The study hypothesis was that one-physician propofol sedation complication rates are similar to two-physician rates. Methods: This was a before-and-after observational study of patients >17 years of age consenting to EDPS with propofol. EDPS completed with one physician were compared to those completed with two (separate physicians performing the sedation and the procedure). All data were prospectively collected. The study was completed at an urban Level I trauma center. Standard monitoring and procedures for EDPS were followed, with physicians blinded to the objectives of this research. The frequency and incremental dosing of medication was left to the discretion of the treating physicians. The study protocol required an ED nurse trained in data collection to be present to record vital signs and assess for any prospectively defined complications. We used chi-square tests to compare the binary outcomes and ASA scores across the time periods, and two-sample t-tests to test for differences in age between the two time periods. Results: During the 2-year study period we enrolled 481 patients: 252 one-physician EDPS sedations and 229 two-physician. All patients meeting inclusion criteria were included in the study. Total adverse event rates were 4.4% and 3.1%, respectively (p = 0.450). The most common complications were hypotension and oxygen desaturation, with one-physician rates of 2.0% and 0.8% and two-physician rates of 1.8% and 0.9%, respectively (p = 0.848 and 0.923). The unsuccessful procedure rates were 4.0% vs 3.9% (p = 0.983). Conclusion: This study demonstrated no significant difference in complication rates for propofol EDPS completed by one physician as compared to two.
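The primary comparisons in the two sedation abstracts above are differences between independent proportions. A hedged sketch of that calculation, a risk difference with a Wald 95% CI plus a chi-square test, is shown below using the adverse respiratory event counts reported for ketofol versus propofol; the abstract's exact interval and test options may differ.

```python
# Illustrative sketch, not study code: risk difference and chi-square test for
# two independent proportions (ketofol 43/142 vs. propofol 46/142, as above).
import numpy as np
from scipy.stats import chi2_contingency

ketofol_events, ketofol_n = 43, 142
propofol_events, propofol_n = 46, 142

p1 = ketofol_events / ketofol_n
p2 = propofol_events / propofol_n
diff = p2 - p1                                   # ~0.02, i.e., "difference 2%"
se = np.sqrt(p1 * (1 - p1) / ketofol_n + p2 * (1 - p2) / propofol_n)
ci = (diff - 1.96 * se, diff + 1.96 * se)        # roughly -9% to +13%

table = [[ketofol_events, ketofol_n - ketofol_events],
         [propofol_events, propofol_n - propofol_events]]
chi2, p, dof, _ = chi2_contingency(table)        # Yates-corrected by default for 2x2 tables

print(f"risk difference {diff:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}, p = {p:.3f}")
```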
Background: Overdose patients are often monitored using pulse oximetry, which may not detect changes in patients on high-flow oxygen. Objectives: To determine whether changes in end-tidal carbon dioxide (ETCO2) detected by capnographic monitoring are associated with clinical interventions due to respiratory depression (CRD) in patients undergoing evaluation for a decreased level of consciousness after a presumed drug overdose. Methods: This was a prospective, observational study of adult patients undergoing evaluation for a drug overdose in an urban county ED. All patients received supplemental oxygen. Patients were continuously monitored by trained research associates. The level of consciousness was recorded using the Observer's Assessment of Alertness/Sedation scale (OAA/S). Vital signs, pulse oximetry, and OAA/S were monitored and recorded every 15 minutes and at the time of occurrence of any CRD. Respiratory rate and ETCO2 were measured at five-second intervals using a Capno-Stream20 monitor. CRD included an increase in supplemental oxygen, the use of bag-valve-mask ventilations, repositioning to improve ventilation, and physical or verbal stimulus to induce respiration, and were performed at the discretion of the treating physicians and nurses. Changes from baseline in ETCO2 values and waveforms among patients who did or did not have a clinical intervention were compared using Wilcoxon rank sum tests. Results: 100 patients were enrolled in the study (age 35, range 18 to 67, 62% male, median OAA/S 4, range 1 to 5). Suspected overdoses were due to opioids in 34, benzodiazepines in 14, an antipsychotic in 14, and others in 38. The median time of evaluation was 165 minutes (range 20 to 725). CRD occurred in 47% of patients, including an increase in O2 in 38%, repositioning in 14%, and stimulation to induce respiration in 23%. 16% had an O2 saturation of <93% (median 88, range 73 to 92) and 8% had a loss of ETCO2 waveform at some time, all of whom had a CRD. The median change in ETCO2 from baseline was 5 mmHg, range 1 to 30. Among patients with CRD it was 14 mmHg, range 10 to 30, and among patients with no CRD it was 5 mmHg, range 1 to 13 (p = 0.03). Conclusion: The change in ETCO2 from baseline was larger in patients who required clinical interventions than in those who did not. In patients on high-flow oxygen, capnographic monitoring may be sensitive to the need for airway support. How Reliable Are Health Care Providers in Reporting Changes in ETCO2 Waveform Anas Sawas 1, Scott Youngquist 1, Troy Madsen 1, Matthew Ahern 1, Camille Broadwater-Hollifield 1, Andrew Syndergaard 1, Jared Phelps 2, Bryson Garbett 1, Virgil Davis 1 1 University of Utah, Salt Lake City, UT; 2 Midwestern University, Glendale, AZ Background: ETCO2 changes have been used in procedural sedation and analgesia (PSA) research to evaluate subclinical respiratory depression associated with sedation regimens. Objectives: To evaluate the accuracy of bedside clinician reporting of changes in ETCO2. Methods: This was a prospective, randomized, single-blind study conducted in the ED setting from June 2010 to the present. This study took place at the academic adult ED (21 beds) of a 405-bed Level I trauma center. Subjects were randomized to receive either ketamine-propofol or propofol according to a standardized protocol. Loss of ETCO2 waveform for ≥15 sec was recorded. Following sedation, questionnaires were completed by the sedating physicians.
Digitally recorded ETCO2 waveforms were also reviewed by an independent physician and a trained research assistant (RA). To ensure the reliability of trained research assistants, we compared their analyses with the analyses of an independent physician for the first 41 recordings. The target enrollment was 65 patients in each group (N = 130 total). Statistics were calculated using SAS statistical software. Results: 91 patients were enrolled; 53 (58.2%) were male and 38 (41.8%) female. Mean age was 44.93 ± 17.93 years. Most participants did not have major risk factors for apnea or for further complications (86.3% were ASA class 1 or 2). ETCO2 waveforms were reviewed by 87 (95.6%) sedating physicians and 84 (92.3%) nurses at the bedside. There were 70 (76.9%) ETCO2 waveform recordings, of which 42 (60.0%) were reviewed by an independent physician and 70 (100%) by an RA. A kappa test for agreement between independent physicians and RAs was conducted on 41 recordings and there were no discordant pairs (kappa = 1). Compared to sedating physicians, the independent physician was more likely to report ETCO2 wave losses (OR 1.37, 95% CI 1.08-1.73). Compared to sedating physicians, RAs were more likely to report ETCO2 wave losses (OR 1.39, 95% CI 1.14-1.70). Conclusion: Compared to sedating physicians at the bedside, independent physicians and RAs were more likely to note ETCO2 waveform losses. An independent review of recorded ETCO2 waveform changes will be more reliable for future sedation research. Background: Comprehensive studies evaluating current practices of ED airway management in Japan are lacking. Many emergency physicians in Japan still experience resistance regarding rapid sequence intubation (RSI). Objectives: We sought to compare the success and complication rates of RSI with those of non-RSI. Methods: Design and Setting: We conducted a multicenter prospective observational study using the JEAN registry of EDs at 11 academic and community hospitals in Japan between 2010 and 2011. Data fields include ED characteristics, patient and operator demographics, method of airway management, number of attempts, and adverse events. We defined non-RSI as intubation with sedation only, with neuromuscular blockade only, or without medication. Participants: All patients undergoing emergency intubation in the ED were eligible for inclusion. Cardiac arrest encounters were excluded from the analysis. Primary analysis: We compared RSI with non-RSI in terms of success rate on the first attempt, success within three attempts, and complication rate. We present descriptive data as proportions with 95% confidence intervals (CIs). We report odds ratios (OR) with 95% CI via chi-square testing. Results: The database recorded 2710 intubations (capture rate 98%) and 1670 met the inclusion criteria. RSI was the initial method chosen in 489 (29%) and non-RSI in 1181 (71%). Use of RSI varied among institutions from 0% to 79%. Successful RSI on the first attempt and within three attempts occurred in 353 intubations (72%, 95% CI 68%-76%) and 474 intubations (97%, 95% CI 95%-98%), respectively. Successful non-RSI on the first attempt and within three attempts occurred in 724 intubations (61%, 95% CI 58%-64%) and 1105 intubations (94%, 95% CI 92%-95%). Success rates of RSI on the first attempt and within three attempts were higher than those of non-RSI (OR 1.64, 95% CI 1.30-2.06 and OR 2.14, 95% CI 1.22-3.77, respectively). We recorded 67 complications in RSI (14%) and 165 in non-RSI (14%).
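Stepping back to the capnography waveform-review abstracts earlier in this group: they report perfect agreement (kappa = 1, no discordant pairs) between the independent physician and the trained RA on the first 41 recordings. As a point of reference only, the following is a minimal sketch of how Cohen's kappa is computed for two raters; the example ratings are hypothetical and are not the study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    if expected == 1.0:  # both raters used a single, identical label throughout
        return 1.0
    return (observed - expected) / (1.0 - expected)

# Hypothetical readings: did each reviewer note a >=15 s ETCO2 waveform loss?
physician = ["loss", "none", "none", "loss", "none", "none", "loss", "none"]
assistant = ["loss", "none", "none", "loss", "none", "loss", "loss", "none"]
print(round(cohens_kappa(physician, assistant), 2))  # 0.75 for this toy example
```

When there are no discordant pairs, observed agreement equals 1 and kappa reduces to 1, which is the situation described in the abstract.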
There was no significant difference in complication rate between RSI and non-RSI (OR 0.98, 95% CI 0.72-1.32). Conclusion: In this multi-center prospective study in Japan, we demonstrated a high degree of variation in the use of RSI for ED intubation. Additionally, we found that the success rates of RSI on the first attempt and within three attempts were both higher than those of non-RSI. This study is limited by reporting bias and confounding by indication. (Originally submitted as a ''late-breaker.'') Methods: This was a prospective, randomized, single-blind study conducted in the ED setting from June 2010 until the present time. This study took place at the academic adult ED (21 beds) of a 405-bed hospital and Level I trauma center. Subjects were randomized to receive either ketamine-propofol or propofol according to a standardized protocol. ETCO2 waveforms were digitally recorded. ETCO2 changes were evaluated by the sedating physicians at the bedside. Recorded waveforms were reviewed by an independent physician and a trained research assistant (RA). To ensure the reliability of trained RAs, we computed a kappa test for agreement between the analyses of independent physicians and RAs for the first 41 recordings. A post-hoc analysis of the association between any loss, the number of losses, and total duration of loss of ETCO2 waveform and CRP was performed. On review we recorded the absence or presence of loss of ETCO2 and the total duration in seconds of all lost ETCO2 episodes ≥15 seconds. ORs were calculated using SAS statistical software. Results: 91 patients were enrolled; 53 (58.2%) were male and 38 (41.8%) female. 86.3% of participants were ASA class 1 or 2. Waveforms were reviewed by 87 (95.6%) sedating physicians. There were 70 (76.9%) waveform recordings, of which 42 (60.0%) were reviewed by an independent physician and 70 (100%) by RAs; there were no discordant pairs (kappa = 1). There were 24 (26.4%) CRP events. Any loss of ETCO2 was associated with a non-significant OR of 4.06 (95% CI 0.75-21.9) for CRP. However, the duration of ETCO2 loss was significantly associated with CRP, with an OR of 1.38 (95% CI 1.08-1.76) for each 30-second interval of lost ETCO2. The number of losses was also significantly associated with the outcome (OR 1.48, 95% CI 1.15-1.91). Conclusion: Defining subclinical respiratory depression as present or absent may be less useful than quantitative measurements. This suggests that risk is cumulative over periods of loss of ETCO2, and the duration of loss may be a better marker of sedation depth and risk of complications than classification of any loss. Background: ED visits present an opportunity to deliver brief interventions (BIs) to reduce violence and alcohol misuse among urban adolescents at risk for future injury. Previous analyses demonstrated that a brief intervention resulted in reductions in violence and alcohol consequences up to 6 months. Objectives: This paper describes findings examining the efficacy of BIs on peer violence and alcohol misuse at 12 months. Methods: Patients (14-18 yrs) at an ED reporting past-year alcohol use and aggression were enrolled in the RCT, which included computerized assessment and randomization to a control group, a BI delivered by a computer (CBI), or a BI delivered by a therapist assisted by a computer (TBI). Baseline and 12-month assessments included violence measures (peer aggression, peer victimization, violence-related consequences) and alcohol measures (alcohol misuse, binge drinking, alcohol-related consequences).
Results: 3338 adolescents were screened (88% participation). Of those, 726 screened positive for violence and alcohol use and were randomized; 84% completed 12-month follow-up. As compared to the control group, the TBI group showed significant reductions in peer aggression (p < 0.01) and peer victimization (p < 0.05) at 12 months. BI and control groups did not differ on alcohol-related variables at 12 months. Conclusion: Evaluation of the SafERteens intervention one year following an ED visit provides support for the efficacy of computer-assisted therapist brief intervention for reducing peer violence. Violence Against ED Health Care Workers: A 9-Month Experience. Terry Kowalenko1, Donna Gates2, Gordon Gillespie2, Paul Succop2; 1University of Michigan, Ann Arbor, MI; 2University of Cincinnati, Cincinnati, OH. Background: Health care (HC) support occupations have an injury rate nearly 10 times that of the general sector due to assaults, with doctors and nurses nearly 3 times greater. Studies have shown that the ED is at greatest risk of such events compared to other HC settings. Objectives: To describe the incidence of violence in ED HC workers over 9 months. Specific aims were to 1) identify demographic, occupational, and perpetrator factors related to violent events; 2) identify the predictors of acute stress response in victims; and 3) identify predictors of loss of productivity after the event. Methods: A longitudinal, repeated-measures design was used to collect monthly survey data from ED HC workers (HCWs) at six hospitals in two states. Surveys assessed the number and type of violent events, and feelings of safety and confidence. Victims also completed specific violent event surveys. Descriptive statistics and a repeated-measures linear regression model were used. Results: 213 ED HCWs completed 1795 monthly surveys, and 827 violent events were reported. The average per-person violent event rate over 9 months was 4.15. 601 events were physical threats (3.01 per person in 9 months). 226 events were assaults (1.13 per person in 9 months). 501 violent event surveys were completed, describing 341 physical threats and 160 assaults, with 20% resulting in injuries. 63% of the physical threats and 52% of the assaults were perpetrated by men. Comparing occupational groups revealed significant differences between nurses and physicians for all reported events (p = 0.0048), with the greatest difference in physical threats (p = 0.0447). Nurses felt less safe than physicians (p = 0.0041). Physicians felt more confident than nurses in dealing with the violent patient (p = 0.013). Nurses were more likely to experience acute stress than physicians (p < 0.001). Acute stress significantly reduced productivity in general (p < 0.001), with a significant negative effect on ''ability to handle/manage workload'' (p < 0.001) and ''ability to handle/manage cognitive demands'' (p < 0.05). Conclusion: ED HCWs are frequent victims of violence perpetrated by visitors and patients. This violence results in injuries, acute stress, and loss of productivity. Acute stress has negative consequences on the workers' ability to perform their duties. This has serious potential consequences for the victim as well as for the care they provide to their patients. A Randomized Controlled Feasibility Trial of Vacant Lot Greening to Reduce Crime and Increase Perceptions of Safety. Eugenia C. Garvin, Charles C.
Branas Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA Background: Vacant lots, often filled with trash and overgrown vegetation, have been associated with intentional injuries. A recent quasi-experimental study found a significant decrease in gun crimes around vacant lots that had been greened compared with control lots. Objectives: To determine the feasibility of a randomized vacant lot greening intervention, and its effect on police-reported crime and perceptions of safety. Methods: For this randomized controlled feasibility trial of vacant lot greening, we partnered with the Pennsylvania Horticulture Society (PHS) to perform the greening intervention (cleaning the lots, planting grass and trees, and building a wooden fence around the perimeter). We analyzed police crime data and interviewed people living around the study vacant lots (greened and control) about perceptions of safety before and after greening. Results: A total of 5200 sq ft of randomly selected vacant lot space was successfully greened. We used a master database of 54,132 vacant lots to randomly select 50 vacant lot clusters. We viewed each cluster with the PHS to determine which were appropriate to send to the City of Philadelphia for greening approval. The vacant lot cluster highest on the random list to be approved by the City of Philadelphia was designated the intervention site, and the next highest was designated the control site. Overall, 29 participants completed baseline interviews, and 21 completed follow-up interviews after 3 months. 59% of participants were male, 97% were black or African American, and 52% had a household income less than $25,000. Unadjusted difference-in-differences estimates showed a decrease in gun assaults around greened vacant lots compared to control. Regression-adjusted estimates showed that people living around greened vacant lots reported feeling safer after greening compared to those who lived around control vacant lots (p < 0.01). Conclusion: Conducting a randomized controlled trial of vacant lot greening is feasible. Greening may reduce certain gun crimes and make people feel safer. However, larger prospective trials are needed to further investigate this link. Screening for Violence Identifies Young Adults at Risk for Return ED Visits for Injury Abigail Hankin-Wei, Brittany Meagley, Debra Houry Emory University, Atlanta, GA Background: Homicide is the second leading cause of death among youth ages 15-24. Prior studies, in nonhealth care settings, have shown associations between violent injury and risk factors including exposure to community violence, peer behavior, and delinquency. Objectives: To assess whether self-reported exposure to violence risk factors can be used to predict future ED visits for injuries. Methods: We conducted a prospective cohort study in the ED of a Southeastern US Level I trauma center. Patients aged 15-24 presenting for any chief complaint were included unless they were critically ill, incarcerated, or could not read English. Recruitment took place over six months, by a trained research assistant (RA). The RA was present in the ED for 3-5 days per week, with shifts scheduled such that they included weekends and weekdays, over the hours from 8 am-8 pm. Patients were offered a $5 gift card for participation. 
At the time of initial contact in the ED, patients completed a written questionnaire which included validated measures of the following risk factors: a) aggression, b) perceived likelihood of violence, c) recent violent behavior, d) peer behavior, e) community exposure to violence, and f) positive future outlook. At 12 months following the initial ED visit, the participants' medical records were reviewed to identify any subsequent ED visits for injury-related complaints. Data were analyzed with chi-square and logistic regression analyses. Results: 332 patients were approached, of whom 300 consented. Participants' average age was 21.1 years, with 57% female and 86% African American. Return visits for injuries were significantly associated with hostile/aggressive feelings (RR 3.7, CI 1.42-9.0), self-reported perceived likelihood of violence (RR 5.16, CI 1.93-13.78), recent violent behavior (RR 3.16, CI 1.01-9.88), and peer group violence (RR 4.4, CI 1.72-11.25). These findings remained significant when controlling for participant sex. Conclusion: A brief survey of risk factors for violence is predictive of return visits to the ED for injury. These findings identify a potentially important tool for primary prevention of violent injuries among young adults visiting the ED for both injury and non-injury complaints. Background: Sepsis is a commonly encountered disease in the ED, with high mortality. While several clinical prediction rules (CPR) including MEDS, SIRS, and CURB-65 exist to help clinicians recognize the risk of mortality from sepsis early, most have suboptimal performance. Objectives: To derive a novel CPR for mortality of sepsis utilizing clinically available and objective predictors in the ED. Methods: We retrospectively reviewed all adult septic patients who visited the ED at a tertiary hospital during the year 2010 with two sets of blood cultures ordered by physicians. Basic demographics, ED vital signs, symptoms and signs, underlying illnesses, laboratory findings, microbiological results, and discharge status were collected. Multivariate logistic regressions were used to obtain a novel CPR using predictors with p < 0.1 in univariate analyses. The existing CPRs were compared with this novel CPR using AUC. Results: Of 8699 included patients, 7.6% died in hospital, 51% had diabetes, 49% were older than 65 years of age, 21% had malignancy, and 16% had positive blood bacterial culture tests. Predisposing factors including history of malignancy, liver disease, immunosuppressed status, chronic kidney disease, congestive heart failure, and age older than 65 years were found to be associated with mortality (all p < 0.05). Patients who died tended to have lower body temperature, narrower pulse pressure, higher red cell distribution width (RDW) and bandemia, higher blood urea nitrogen (BUN), ammonia, and C-reactive protein levels, and longer prothrombin time and activated partial thromboplastin time (aPTT) (all p < 0.05). The most parsimonious CPR incorporated history of malignancy (OR 2.3, 95% CI 1.9-2.7), prolonged aPTT (3.0, 2.4-3.8), and presence of bandemia (1.7, 1.4-2.0) … Results: There was poor agreement between the physician's unstructured assessment used in clinical practice and the guidelines put forth by the AHA/ACC/ACEP task force. ED physicians were more likely to assess a patient as low risk (42%), while AHA guidelines were more likely to classify patients as intermediate (50%) or high (40%) risk.
However, when comparing the patients' final ACS diagnoses with their risk assessments, ED physicians proved better at identifying high-risk patients who in fact had ACS, while the AHA/ACC/ACEP guidelines proved better at correctly identifying low-risk patients who did not have ACS. Conclusion: In the ED, physicians are far more efficient at correctly placing patients with underlying ACS into a high-risk category, while established criteria may be overly conservative when applied to an acute care population. Further research is indicated to look at ED physicians' risk stratification and ensuing patient care to assess for appropriate decision making and ultimate outcomes. Comparative … Conclusion: The AMUSE score was more specific, but the Wells score was more sensitive, for acute lower limb DVT in this cohort. There is no significant advantage in using the AMUSE over the Wells score in ED patients with suspected DVT. Background: The direct cost of medical care is not accurately reflected in charges or reimbursement. The cost of boarding admitted patients in the ED has been studied in terms of opportunity costs, which are indirect. The actual direct effect on hospital expenses has not been well defined. Objectives: To calculate the difference in cost to the hospital of caring for an admitted patient in the ED versus a non-critical care in-patient unit. Methods: Time-directed activity-based costing (TDABC) has recently been proposed as a method of determining the actual cost of providing medical services. TDABC was used to calculate the cost per patient bed-hour both in the ED and for an in-patient unit. The costs include nursing, nursing assistants, clerks, attending and resident physicians, supervisory salaries, and equipment maintenance. Boarding hours were determined from placement of the admission order to transfer to the in-patient unit. A convenience sample of 100 consecutive non-critical care admissions was assessed to find the degree of ED physician involvement with boarded patients. Results: The overhead cost per patient bed-hour in the ED was $60.80. The equivalent in-patient cost per bed-hour was $23.39, a differential of $37.41. There were 27,618 boarding hours for medical-surgical patients in 2010, a differential of $1,033,189.38 for the year. For the short-stay unit (no residents), the cost per patient hour was $11.36 and the boarding hours were 11,804. This resulted in a differential cost of $583,389.76, for a total direct cost to the hospital of $1,616,579.14. Review of 100 consecutive admissions showed no orders placed by the ED physician after the decision to admit. Conclusion: Concentration of resources in the ED means considerably higher cost per unit of care as compared to an in-patient unit. Keeping admitted patients boarding in the ED results in expensive underutilization. This is exclusive of significant opportunity costs of lost revenue from walk-out and diverted patients. This study includes the cost of teaching attendings and residents (ED and in-patient). In a non-teaching setting, the differential would be less and the cost of boarding would be shared by a fee-for-service ED physician group as well as the hospital. Improving Identification of Frequent Emergency Department Users Using a Regional Health Information Exchange. Background: Frequent ED users consume a disproportionate amount of health care resources. Interventions are being designed to identify such patients and direct them to more appropriate treatment settings.
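As a worked illustration of the boarding-cost arithmetic in the time-directed activity-based costing (TDABC) abstract above, the sketch below simply reproduces the reported per-bed-hour costs and boarding-hour totals; the variable names are illustrative, and the small difference from the published short-stay figure presumably reflects rounding in the abstract.

```python
# Per-patient bed-hour overhead costs reported in the TDABC analysis
ed_cost_per_hour = 60.80          # ED
inpatient_cost_per_hour = 23.39   # non-critical care in-patient unit
short_stay_cost_per_hour = 11.36  # short-stay unit (no residents)

# Boarding hours reported for 2010
med_surg_boarding_hours = 27_618
short_stay_boarding_hours = 11_804

# Extra cost of each boarded hour spent in the ED rather than the receiving unit
med_surg_diff = (ed_cost_per_hour - inpatient_cost_per_hour) * med_surg_boarding_hours
short_stay_diff = (ed_cost_per_hour - short_stay_cost_per_hour) * short_stay_boarding_hours

print(f"Med-surg differential:     ${med_surg_diff:,.2f}")    # $1,033,189.38, as reported
print(f"Short-stay differential:   ${short_stay_diff:,.2f}")  # ~$583,590 (abstract reports $583,389.76)
print(f"Total annual differential: ${med_surg_diff + short_stay_diff:,.2f}")
```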
Because some frequent users visit more than one ED, a health information exchange (HIE) may improve the ability to identify frequent ED users across sites of care. Objectives: To demonstrate the extent to which an HIE can identify the marginal increase in frequent ED users beyond that which can be detected with data from a single hospital. Methods: Data from 6/1/10 to 5/31/11 from the New York Clinical Information Exchange (NYCLIX), an HIE in New York City that includes ten hospitals, were analyzed to calculate the number of frequent ED users (≥4 visits in 30 days) at each site and across the HIE. Results: There were 10,555 (1% of total patients) frequent ED users, with 7,518 (71%) of frequent users having all their visits at a single ED, while 3,037 (29%) frequent users were identified only after counting visits to multiple EDs (Table 1). Site-specific increases varied from 7% to 62% (SD 16.5). Frequent ED users accounted for 1% of patients but for 6% of visits, averaging 9.74 visits per year, versus 1.55 visits per year for all other patients. 28.5% of frequent users visited two or more EDs during the study period, compared to 10.6% of all other patients. Conclusion: Frequent ED users commonly visited multiple NYCLIX EDs during the study period. The use of an HIE helped identify many additional frequent users, though the benefits were lower for hospitals not located in the vicinity of another NYCLIX hospital. Measures that take a community, rather than a single institution, into account may be more reflective of the care that the patient experiences. Indocyanine … Background: Due to their complex nature and high associated morbidity, burn injuries must be handled quickly and efficiently. Partial thickness burns are currently treated based upon visual judgment of burn depth by the clinician. However, such judgment is only 67% accurate and not expeditious. Laser Doppler Imaging (LDI) is far more accurate, nearly 96% after 3 days. However, it is too cumbersome for routine clinical use. Laser Assisted Indocyanine Green Angiography (LAICGA) has been indicated as an alternative for diagnosing the depth of burn injuries, and possesses greater utility for clinical translation. As the preferred outcome of burn healing is aesthetic, it is of interest to determine if wound contracture can be predicted early in the course of a burn by LAICGA. Objectives: To determine the utility of early burn analysis using LAICGA in the prediction of 28-day wound contracture. Methods: A prospective animal experiment was performed using six anesthetized pigs, each with 20 standardized wounds. Differences in burn depth were created by using a 2.5 × 2.5 cm aluminum bar at three exposure times and temperatures: 70°C for 30 seconds, 80°C for 20 seconds, and 80°C for 30 seconds. We have shown in prior validation experiments that these burn temperatures and times create distinct burn depths. LAICGA scanning, using Lifecell SPY Elite, took place at 1 hour, 24 hours, 48 hours, 72 hours, and 1 week post-burn. Imaging was read by a blinded investigator, and perfusion trends were compared with day 28 post-burn contraction outcomes measured using ImageJ software. Biopsies were taken on day 28 to measure scar tissue depth. Results: Deep burns were characterized by a blue center indicating poor perfusion, while more superficial burns were characterized by a yellow-red center indicating perfusion that was close to that of the normal uninjured adjacent skin (see figure).
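Returning to the NYCLIX abstract above: a minimal sketch of the kind of cross-site frequent-user rule it describes (≥4 ED visits within any 30-day window, pooled across participating EDs) is shown below. The patient identifiers and dates are hypothetical, and a real HIE implementation would also need record linkage across sites.

```python
from collections import defaultdict
from datetime import date, timedelta

def frequent_users(visits, min_visits=4, window_days=30):
    """visits: iterable of (patient_id, ed_site, visit_date) tuples.
    Returns patient_ids with >= min_visits within any window_days span,
    counting visits across all sites together."""
    by_patient = defaultdict(list)
    for patient_id, _site, visit_date in visits:
        by_patient[patient_id].append(visit_date)

    window = timedelta(days=window_days)
    flagged = set()
    for patient_id, dates in by_patient.items():
        dates.sort()
        start = 0
        for end in range(len(dates)):
            while dates[end] - dates[start] > window:
                start += 1  # shrink the window from the left
            if end - start + 1 >= min_visits:
                flagged.add(patient_id)
                break
    return flagged

# Hypothetical visit log spanning two EDs
visits = [
    ("pt1", "ED-A", date(2010, 6, 1)), ("pt1", "ED-B", date(2010, 6, 10)),
    ("pt1", "ED-A", date(2010, 6, 20)), ("pt1", "ED-B", date(2010, 6, 25)),
    ("pt2", "ED-A", date(2010, 6, 1)), ("pt2", "ED-A", date(2010, 8, 1)),
]
print(frequent_users(visits))  # {'pt1'} -- visible only when both sites' visits are pooled
```

With ED-A's log alone, pt1 shows only two visits in the window, mirroring the 29% of frequent users in the abstract who were identifiable only after counting visits to multiple EDs.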
A linear relationship between contraction outcome and burn perfusion could be discerned as early as 1 hour post-burn, peaking in strength at 24-48 hours post-burn. Burn intensity could be effectively identified at 24 hours post-burn, although there was no relationship with scar tissue depth. Conclusion: Pilot data indicate that LAICGA using Lifecell SPY has the ability to determine the depth of injury and predict the degree of contraction of deep dermal burns within 1-2 days of injury with greater accuracy than clinical scoring. … Objectives: We hypothesize that real-time monitoring of an integrated electronic medical records system and the subsequent firing of a ''sepsis alert'' icon on the electronic ED tracking board results in improved mortality for patients who present to the ED with severe sepsis or septic shock. Methods: We retrospectively reviewed our hospital's sepsis registry and included all patients diagnosed with severe sepsis or septic shock who presented to an academic community ED with an annual census of 73,000 visits and were admitted to a medical ICU or stepdown ICU bed between June 2009 and October 2011. In May 2010 an algorithm was added to our integrated medical records system that identifies patients with two SIRS criteria and evidence of end-organ damage or shock on lab data. When these criteria are met, a ''sepsis alert'' icon (prompt) appears next to that patient's name on the ED tracking board. The system also pages an in-house, specially trained ICU nurse who can respond on a PRN basis and assist in the patient's management. 18 months of intervention data are compared with 11 months of baseline data. Statistical analysis was via z-test for proportions. Results: For ED patients with severe sepsis, the pre- and post-alert mortality was 19 of 125 (15%) and 34 of 378 (9%), respectively (p = 0.084; n = 503). In the septic shock group, the pre- and post-alert mortality was 27 of 92 (29%) and 48 of 172 (28%), respectively (p = 0.977). With ED and inpatient sepsis alerts combined, the severe sepsis subgroup mortality was reduced from 17% to 9% (p = 0.013; n = 622). Conclusion: Real-time ED EHR screening for severe sepsis and septic shock patients did not improve mortality. A positive trend in the severe sepsis subgroup was noted, and the combined inpatient plus ED data suggest statistical significance may be reached as more patients enter the registry. Limitations: retrospective study, potential increased data capture post-intervention, and no ''gold standard'' to test the sepsis alert sensitivity and specificity. … Descriptive statistics were calculated. Principal component analysis was used to determine questions with continuous response formats that could be aggregated. Aggregated outcomes were regressed onto predictor demographic variables using multiple linear regression. Results: 80/100 physicians completed the survey. Physicians had a mean of 9.8 ± 9.0 years of experience in the ED. 23.8% were female. Eight physicians (10%) reported never having used the tool, while 70.8% of users estimated having used it more than five times. 75% of users cited the ''P'' alert on the ETB as the most common notification method. Most felt the ''P'' alert did not help them identify patients with pneumonia earlier (mean = 2.5 ± 1.2), but found it moderately useful in reminding them to use the tool (3.5 ± 1.3).
Physicians found the tool helpful in making decisions regarding triage, diagnostic studies, and antibiotic selection for outpatients and inpatients (3.7 ± 1.0, 3.6 ± 1.1, 3.6 ± 1.1, and 4.2 ± 0.9, respectively). They did not feel it negatively affected their ability to perform other tasks (1.6 ± 0.9). Using multiple linear regression, neither age, sex, years of experience, nor tool use frequency significantly predicted responses to questions about triage and antibiotic selection, technical difficulties, or diagnostic ordering. Conclusion: ED physicians perceived the tool to be helpful in managing patients with pneumonia without negatively affecting workflow. Perceptions appear consistent across demographic variables and experience. … Objectives: We seek to examine whether use of the SALT device can provide reliable tracheal intubation during ongoing CPR. The dynamic model tested the device with human-powered CPR (manual) and with an automated chest compression device (Physio Control Lucas 2). The hypothesis is that the predictable movement of an automated chest compression device will make tracheal intubation easier than the random movement from manual CPR. Methods: The project was an experimental controlled trial and took place in the ED at a tertiary referral center in Peoria, Illinois. This project was an expansion arm of a similarly structured study using traditional laryngoscopy. Emergency medicine residents, attending physicians, paramedics, and other ACLS-trained staff were eligible for participation. In randomized order, each participant attempted intubation on a mannequin using the SALT device with no CPR ongoing, during CPR with manual compressions, and during CPR with automated chest compressions. Participants were timed in their attempts and success was determined after each attempt. Results: There were 43 participants in the trial. The success rates in the control group and the automated CPR group were both 86% (37/43), and the success rate in the manual CPR group was 79% (34/43). … Objectives: Our primary hypothesis was that in fasting, asymptomatic subjects, larger fluid boluses would lead to proportional aortic velocity changes. Our secondary endpoints were to determine inter- and intra-subject variation in aortic velocity measurements. Methods: The authors performed a prospective randomized double-blinded trial using healthy volunteers. We measured the velocity time integral (VTI) and maximal velocity (Vmax) with an estimated 0-20° pulsed-wave Doppler interrogation of the left ventricular outflow in the apical-5 cardiac window. Three physicians reviewed optimal sampling gate position and Doppler angle and verified the presence of an aortic closure spike. Angle correction technology was not used. Subjects with no history of cardiac disease or hypertension fasted for 12 hours and were then randomly assigned to receive a normal saline bolus of 2 ml/kg, 10 ml/kg, or 30 ml/kg over 30 minutes. Aortic velocity profiles were measured before and after each fluid bolus. Results: Forty-two subjects were enrolled. Mean age was 33 ± 10 (range 24 to 61) and mean body mass index 24.7 ± 3.2 (range 18.7 to 32). Mean volumes (in ml) for groups receiving 2 ml/kg, 10 ml/kg, and 30 ml/kg were 151, 748, and 2162, respectively. Mean baseline Vmax (in cm/s) of the 42 subjects was 108.4 ± 12.5 (range 87 to 133). Mean baseline VTI (in cm) was 23.2 ± 2.8 (range 18.2 to 30.0). Pre- and post-fluid mean differences were -1.7 (± 10.3) for Vmax and 0.7 (± 2.7) for VTI.
Aortic velocity changes in groups receiving 2 ml/kg, 10 ml/kg, and 30 ml/kg were not statistically significant (see table). Heart rate changes were not significant. Background: Clinicians recognize that septic shock is a highly prevalent, high-mortality disease state. Evidence supports early ED resuscitation, yet care delivery is often inconsistent and incomplete. Objectives: To discover latent critical barriers to successful ED resuscitation of septic shock. Methods: We conducted five 90-minute risk-informed in-situ simulations. ED physicians and nurses working in the real clinical environment cared for a standardized patient, introduced into their existing patient workload, with signs and symptoms of septic shock. Immediately after case completion, clinicians participated in a 30-minute debriefing session. Transcripts of these sessions were analyzed using grounded theory, a method of qualitative analysis, to identify critical barrier themes. Results: Fifteen clinicians participated in the debriefing sessions: four attending physicians, five residents, five nurses, and one nurse practitioner. The most prevalent critical barrier themes were: anchoring bias and difficulty with cognitive framework adaptation as the patient progressed to septic shock (n = 26), difficult interactions between the ED and ancillary departments (n = 22), difficulties with physician-nurse communication and teamwork (n = 18), and delays in placing the central venous catheter due to perceptions surrounding equipment availability and the desire to attend to other competing interests in the ED prior to initiation of the procedure (n = 17 and 14). Each theme was represented in at least four of the five debriefing sessions. Participants reported the in-situ simulations to be a realistic representation of ED sepsis care. Conclusion: In-situ simulation and subsequent debriefing provides a method of identifying latent critical areas for improvement in a care process. Improvement strategies for ED-based septic shock resuscitation will need to address the difficulties in shock recognition and cognitive framework adaptation, physician and nurse teamwork, and prioritization of team effort. … Background: The association between blood glucose level and mortality in critically ill patients is highly debated. Several studies have investigated the association between history of diabetes, blood sugar level, and mortality of septic patients; however, no consistent conclusion has been drawn so far. Objectives: To investigate the association between diabetes and initial glucose level and in-hospital mortality in patients with suspected sepsis from the ED. Methods: We conducted a retrospective cohort study that consisted of all adult septic patients who visited the ED at a tertiary hospital during the year 2010 with two sets of blood cultures ordered by physicians. Basic demographics, ED vital signs, symptoms and signs, underlying illnesses, laboratory findings, microbiological results, and discharge status were collected.
Logistic regressions were used to evaluate the associations of risk factors, initial blood sugar level, and history of diabetes with mortality, as well as the effect modification between initial blood sugar level and history of diabetes. Results: A total of 4997 patients with available blood sugar levels were included, of whom 48% had diabetes, 46% were older than 65 years of age, and 56% were male. The mortality was 6% (95% CI 5.3-6.7%). Patients with a history of diabetes tended to be older, female, and more likely to have chronic kidney disease, lower sepsis severity (MEDS score), and positive blood culture test results (all p < 0.05). Patients with a history of diabetes tended to have lower in-hospital mortality after ED visits with sepsis, controlling for initial blood sugar level (aOR 0.72, 95% CI 0.56-0.92, p = 0.01). An initial normal blood sugar seemed to be beneficial compared to a lower blood sugar level for in-hospital mortality, controlling for history of diabetes, sex, severity of sepsis, and age (aOR 0.61, 95% CI 0.44-0.84, p = 0.002). The effect modification of diabetes on the association between blood sugar level and mortality, however, was not statistically significant (p = 0.09). Conclusion: A normal initial blood sugar level in the ED and a history of diabetes might be protective against mortality in septic patients. Further investigation is warranted to determine the mechanism for these effects. … Methods: This IRB-approved retrospective chart review included all patients treated with therapeutic hypothermia after cardiac arrest during 2010 at an urban, academic teaching hospital. Every patient undergoing therapeutic hypothermia is treated by neurocritical care specialists. Patients were identified by review of neurocritical care consultation logs. Clinical data were dually abstracted by trained clinical study assistants using a standardized data dictionary and case report form. Medications reviewed during hypothermia were midazolam, lorazepam, propofol, fentanyl, cisatracurium, and vecuronium. Results: There were 33 patients in the cohort. Median age was 57 (range 28-86 years), 67% were white, 55% were male, and 49% had a history of coronary artery disease. Seizures were documented by continuous EEG in 11/33 (33%), and 20/33 (61%) died during hospitalization. Most, 30/33 (91%), received fentanyl; 21/33 (64%) received benzodiazepine pharmacotherapy, and 23/33 (70%) received propofol. Paralytics were administered to 23/33 (68%) patients, 14/33 (42%) with cisatracurium and 9/33 (27%) with vecuronium. Of note, one patient required pentobarbital for seizure management. Conclusion: Sedation and neuromuscular blockade are common during management of patients undergoing therapeutic hypothermia after cardiac arrest. Patients in this cohort often received analgesia with fentanyl, and sedation with a benzodiazepine or propofol. Given the frequent use of sedatives and paralytics in survivors of cardiac arrest undergoing hypothermia, future studies should investigate the potential effect of these drugs on prognostication and survival after cardiac arrest. Background: The use of therapeutic hypothermia (TH) is a burgeoning treatment modality for post-cardiac arrest patients. Objectives: We performed a retrospective chart review of patients who underwent post-cardiac arrest TH at eight different institutions across the United States.
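The diabetes and blood sugar abstract above tests for effect modification between history of diabetes and initial glucose level. A minimal sketch of how such an interaction term is typically fit with adjusted logistic regression is shown below; the column names, categories, and synthetic data are assumptions for illustration and do not come from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_effect_modification(df: pd.DataFrame):
    """Adjusted logistic model with a diabetes x glucose-category interaction.
    A non-significant interaction term is the usual basis for concluding that
    effect modification was not statistically significant."""
    model = smf.logit(
        "mortality ~ C(diabetes) * C(glucose_cat, Treatment('normal')) "
        "+ age_over_65 + male + meds_score",
        data=df,
    ).fit(disp=False)
    summary = pd.DataFrame({
        "aOR": np.exp(model.params),
        "ci_low": np.exp(model.conf_int()[0]),
        "ci_high": np.exp(model.conf_int()[1]),
        "p": model.pvalues,
    })
    return model, summary

# Synthetic data, only to make the sketch runnable
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),
    "glucose_cat": rng.choice(["low", "normal", "high"], n),
    "age_over_65": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "meds_score": rng.integers(0, 15, n),
})
logit_p = -3.0 + 0.15 * df["meds_score"] - 0.3 * df["diabetes"]
df["mortality"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model, summary = fit_effect_modification(df)
print(summary.round(3))  # rows whose names contain ':' are the interaction terms
```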
Our objective was to assess how TH is currently being implemented in emergency departments and to assess the feasibility of conducting more extensive TH research using multi-institution retrospective data. Methods: A total of 94 charts with dates from 2008-2011 were sent for review by participating institutions of the Peri-Resuscitation Consortium. Of those reviewed, eight charts were excluded for missing data. Two independent reviewers performed the review, and the results were subsequently compared and discrepancies resolved by a third reviewer. We assessed patient demographics, initial presenting rhythm, time until TH initiation, duration of TH, cooling methods and temperature reached, survival to hospital discharge, and neurological status on discharge. Results: The majority of cases of TH had initial cardiac rhythms of asystole or pulseless electrical activity (55.2%), followed by ventricular tachycardia or fibrillation (34.5%); in 10.3% the inciting cardiac rhythm was unknown. Time to initiation of TH ranged from 0-783 minutes with a mean time of 99 min (SD 132.5). Length of TH ranged from 25-2171 minutes with a mean time of 1191 minutes (SD 536). Average minimum temperature achieved was 32.5°C, with a range from 27.6-36.7°C (SD 1.5°C). Of the charts reviewed, 29 (33.3%) of the patients survived to hospital discharge and 19 (21.8%) were discharged relatively neurologically intact. Conclusion: Research surrounding cardiac arrest has always been difficult given the time and location span from pre-hospital care to emergency department to intensive care unit. Also, as witnessed cardiac arrest events are relatively rare with poor survival outcomes, very large sample sizes are needed to make any meaningful conclusions about TH. Our varied and inconsistent results show that a multi-center retrospective review is also unlikely to provide useful information. A prospective multi-center trial with a uniform TH protocol is needed if we are ever to make any evidence-based conclusions on the utility of TH for post-cardiac arrest patients. Serum … Results: Mean LA was 2.04 (SD = 1.45). Mean age was 4.5 years (SD = 5.20). A statistically significant positive correlation was found between LA and pulse, respiratory rate (RR), WBC, platelets, and LOS, while a significant negative correlation was seen with temperature and HCO3-. When two subjects with LA >10 were dropped as possible outliers, the temperature correlation became non-significant, but a significant negative correlation with age and BUN emerged. Patients in the higher LA group were more likely to be admitted (p = 0.0001) and have longer LOS. Of the discharged patients, there was no difference in mean LA level between those who returned (n = 25, mean LA of 1.88, SD = 0.88) and those who did not (n = 154, mean LA of 1.88, SD = 1.35), p = 0.99. Furthermore, mean LA levels for those with sepsis (n = 138, mean LA of 2.18, SD = 1.75) did not differ from those without sepsis (n = 147, mean LA of 1.9, SD = 1.08), p = 0.11. Conclusion: Higher LA in pediatric patients presenting to the ED with suspected infection correlated with increased pulse, RR, WBC, platelets, and decreased BUN, HCO3-, and age. LA may be predictive of hospitalization, but not of 3-day return rates or pediatric sepsis screening in the ED. Background: Mandibular fractures are one of the most frequently seen injuries in the trauma setting. In terms of facial trauma, mandibular fractures account for 40-62% of all facial bone fractures.
Prior studies have demonstrated that the use of a tongue blade to screen these patients to determine whether a mandibular fracture is present may be as sensitive as x-ray. One study showed the sensitivity and specificity of the test to be 95.7% and 63.5%, respectively. In the last ten years, high-resolution computed tomography (HCT) has replaced panoramic tomography (PT) as the gold standard for imaging of patients with suspected mandibular fractures. This study determines if the tongue blade test (TBT) remains as sensitive a screening tool when compared to the new gold standard of CT. Objectives: The purpose of the study was to determine the sensitivity and specificity of the TBT as compared to the new gold standard of radiologic imaging, HCT. The question being asked: is the TBT still useful as a screening tool for patients with suspected mandibular fractures when compared to the new gold standard of HCT? Methods: Design: Prospective cohort study. Setting: An urban tertiary care Level I trauma center. Subjects: The study took place from 8/1/10 to 8/31/11 and included any person presenting with facial trauma. Intervention: A TBT was performed by the resident physician and confirmed by the supervising attending physician. CT of the facial bones was then obtained for the ultimate diagnosis. Inter-rater reliability (kappa) was calculated, along with sensitivity, specificity, accuracy, PPV, NPV, likelihood ratio (LR) (+), and likelihood ratio (LR) (-), based on the 2 × 2 contingency table generated. Results: Over the study period 85 patients were enrolled. Inter-rater reliability was kappa = 0.93 (SE 0.11). The table demonstrates the outcomes of both the TBT and CT facial bones for mandibular fracture. The following parameters were then calculated based on the contingency table: sensitivity 0.97 (CI 0.81-0.99), specificity 0.72 (CI 0.58-0.83), PPV 0.67 (CI 0.52-0.78), NPV 0.97 (CI 0.87-0.99), accuracy 0.81, LR(+) 3.48, LR(-) 0.04 (CI 0.01-0.31). Conclusion: The TBT is still a useful screening tool to rule out mandibular fractures in patients with facial trauma as compared to the current gold standard of HCT. Background: Appendicitis is the most common surgical emergency occurring in children. The diagnosis of pediatric appendicitis is often difficult, and computerized tomography (CT) scanning is utilized frequently. CT, although accurate, is expensive, time-consuming, and exposes children to ionizing radiation. Radiologists utilize ultrasound for the diagnosis of appendicitis, but it may be less accurate than CT, and may not incorporate emergency physician (EP) clinical impression regarding degree of risk. Objectives: The current study compared EP clinical diagnosis of pediatric appendicitis pre- and post-bedside ultrasonography (BUS). Methods: Children 3-17 years of age were enrolled if their clinical attending physician planned to obtain a consultative ultrasound, CT scan, or surgical consult specific for appendicitis. Most children in the study received narcotic analgesia to facilitate BUS. Subjects were initially graded for likelihood of appendicitis based on research physician-obtained history and physical examination using a Visual Analogue Scale (VAS). Immediately subsequent to initial grading, research physicians performed a BUS and recorded a second VAS impression of appendicitis likelihood. Two outcome measures were combined as the gold standard for statistical analysis.
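The tongue-blade abstract above, and the bedside-ultrasound appendicitis abstract that continues below, both summarize diagnostic accuracy with sensitivity, specificity, predictive values, and likelihood ratios. A minimal sketch of those calculations from a 2 × 2 table follows; the cell counts are back-calculated to be consistent with the tongue-blade abstract's reported totals and summary statistics (the table itself is not reproduced here) and should be read as illustrative rather than as the published data.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table
    (test positive/negative vs disease present/absent)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "LR+": sensitivity / (1 - specificity),
        "LR-": (1 - sensitivity) / specificity,
    }

# Illustrative cell counts consistent with 85 enrolled patients and the reported
# sensitivity 0.97, specificity 0.72, PPV 0.67, NPV 0.97, accuracy 0.81,
# LR+ 3.48, LR- 0.04 (back-calculated, not the published table)
for name, value in diagnostic_accuracy(tp=30, fp=15, fn=1, tn=39).items():
    print(f"{name}: {value:.2f}")
```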
The post-operative pathology report served as the gold standard for subjects who underwent appendectomy, while post 2-week telephone follow-up was used for subjects who did not undergo surgery. Various specific ultrasound measures used for the diagnosis of appendicitis were assessed as well. Results: 29/56 subjects had pathology-proven appendicitis. One subject was pathology-negative post-appendectomy. Of the 26 subjects who did not undergo surgery, none had developed appendicitis at the post 2-week telephone follow-up. Pre-BUS sensitivity was 48% (29-68%) while post-BUS sensitivity was 79% (60-92%). Both pre- and post-BUS specificity was 96% (81-100%). Pre-BUS LR+ was 13 (2-93), while post-BUS LR+ was 21 (3-148). Pre- and post-BUS LR- were 0.5 and 0.2, respectively. BUS changed the diagnosis for 20% of subjects (9-32%). Background: There are few data on the normal distance between the glenoid rim and the posterior aspect of the humeral head in normal and dislocated shoulders. While shoulder x-rays are commonly used to detect shoulder dislocations, they may be inadequate, exacerbate pain in the acquisition of some views, and lead to delay in treatment, compared to bedside ultrasound evaluation. Objectives: Our objective was to compare the glenoid rim to humeral head distance in normal shoulders and in anteriorly dislocated shoulders. This is the first study proposing to set normal and abnormal limits. Methods: Subjects were enrolled in this prospective observational study if they had a chief complaint of shoulder pain or injury, and received a shoulder ultrasound as well as a shoulder x-ray. The sonographers were undergraduate students given ten hours of training to perform the shoulder ultrasound. They were blinded to the x-ray interpretation, which was used as the gold standard. We used a posterior-lateral approach, capturing an image with the glenoid rim, the humeral head, and the infraspinatus muscle. Two parallel lines were applied to the most posterior aspect of the humeral head and the most posterior aspect of the glenoid rim. A line perpendicular to these lines was applied, and the distance measured. In anterior dislocations, a negative measurement was used to denote the fact that the glenoid rim is now posterior to the most posterior aspect of the humeral head. Descriptive analysis was applied to estimate the mean and 25th to 75th interquartile range of normal and anteriorly dislocated shoulders. Results: Eighty subjects were enrolled in this study. There were six shoulder dislocations; however, only four were anterior dislocations. The average distance between the posterior glenoid rim and the posterior humeral head in normal shoulders was 8.7 mm, with a 25th to 75th interquartile range of 6.7 mm to 11.9 mm. The distance in our four cases of anterior dislocation was -11 mm, with a 25th to 75th interquartile range of -10 mm to -12 mm. Conclusion: The distance from the posterior humeral head to the posterior glenoid rim may be 7 mm to 12 mm in patients presenting to the ED with shoulder pain but no dislocation. In contrast, this distance in anterior dislocations was -10 mm or more negative. Shoulder ultrasound may be a useful adjunct to x-ray for diagnosing anterior shoulder dislocations. … Conclusion: In this retrospective study, the presence of RV strain on FOCUS significantly increases the likelihood of an adverse short-term event from pulmonary embolism, and its combination with hypotension performs similarly to other prognostic rules.
Background: Burns are expensive and debilitating injuries, compromising both the structural integrity and vascular supply to skin. They exhibit a substantial potential to deteriorate if left untreated. Jackson defined three ''zones'' of a burn. While the innermost coagulation zone and the outermost zone of hyperemia display generally predictable healing outcomes, the zone of stasis has been shown to be salvageable via clinical intervention. It has therefore been the focus of most acute therapies for burn injuries. While Laser Doppler Imaging (LDI), the current gold standard for burn analysis, has been 96% effective at predicting the need for second-degree burn excision, its clinical translation is problematic, and there is little information regarding its ability to analyze the salvage of the stasis zone in acute injury. Laser Assisted Indocyanine Green Dye Angiography (LAICGA) also shows potential to predict such outcomes with greater clinical utility. Objectives: To test the ability of LDI and LAICGA to predict interspace (zone of stasis) survival in a horizontal burn comb model. Methods: A prospective animal experiment was performed using four pigs. Each pig had a set of six dorsal burns created using a brass ''comb'', creating four rectangular 10 × 20 mm full-thickness burns separated by 5 × 20 mm interspaces. LAICGA and LDI scanning took place at 1 hour, 24 hours, 48 hours, and 1 week post-burn using Novadaq SPY and Moor LDI, respectively. Imaging was read by a blinded investigator, and perfusion trends were compared with interspace viability and contraction. Burn outcomes were read clinically and evaluated via histopathology, and interspace contraction was measured using ImageJ software. Results: LAICGA data showed significant predictive potential for interspace survival. It was 83.3% predictive at 24 hours post-burn, 75% predictive at 48 hours post-burn, and 100% predictive at 7 days post-burn using a standardized perfusion threshold. LDI imaging failed to predict outcome or contraction trends with any degree of reliability. The pattern of perfusion also appears to be correlated with the presence of significant interspace contraction at 28 days, with an 80% adherence to a power trendline. … interventions, 11 Isolation, 4 Testing, 4 Treatment, and 1 ''Other'' category intervention were identified. One intervention involving school closures was associated with a 28% decrease in pediatric ED visits for respiratory illness. Conclusion: Most interventions were not tested in isolation, so the effect of individual interventions was difficult to differentiate. Interventions associated with statistically significant decreases in ED crowding were school closures, as well as interventions in all categories studied. Further study and standardization of intervention input, process, and outcome measures may assist in identifying the most effective methods of mitigating ED crowding and improving surge capacity during an influenza or other respiratory disease outbreak. Communication … Background: The link between extended shift lengths, sleepiness, and occupational injury or illness has been shown, in other health care populations, to be an important and preventable public health concern, but heretofore has not been fully described in emergency medical services (EMS). … Objectives: To assess the effect of an ED-based computer screening and referral intervention for IPV victims and to determine what characteristics resulted in a positive change in their safety.
We hypothesized that women who were experiencing severe IPV and/or were in the contemplation or action stages would be more likely to endorse safety behaviors. Methods: We conducted the intervention for female IPV victims at three urban EDs using a computer kiosk to deliver targeted education about IPV and violence prevention as well as referrals to local resources. All adult English-speaking non-critically ill women triaged to the ED waiting room were eligible to participate. The validated Universal Violence Prevention Screening Protocol was used for IPV screening. Any who disclosed IPV further responded to validated questionnaires for alcohol and drug abuse, depression, and IPV severity. The women were assigned a baseline stage of change (precontemplation, contemplation, action, or maintenance) based on the URICA scale for readiness to change behavior surrounding IPV. Participants were contacted at 1 week and 3 months to assess a variety of pre-determined actions taken to prevent IPV during that period, such as moving out. Statistical analysis (chi-square testing) was performed to compare participant characteristics to the stage of change and whether or not they took protective action. Results: A total of 1,474 people were screened; 154 disclosed IPV and participated in the full survey. 53.3% of the IPV victims were in the precontemplation stage of change, and 40.3% were in the contemplation stage. 110 women returned at 1 week of follow-up (71.4%), and 63 (40.9%) women returned at 3 months of follow-up. 55.5% of those who returned at 1 week and 73% of those who returned at 3 months took protective action against further IPV. There was no association between the various demographic characteristics and whether or not a woman took protective action. Conclusion: ED-based kiosk screening and health information delivery is both a feasible and effective method of health information dissemination for women experiencing IPV. Stage of change was not associated with actual IPV protective measures. … Objectives: We present a pilot, head-to-head comparison of X26 and X2 effectiveness in stopping a motivated person. The objective is to determine the comparative injury-prevention effectiveness of the newer CEW. Methods: Four humans had metal CEW probe pairs placed. Each volunteer had two probe pairs placed (one pair each on the right and left of the abdomen/inguinal region). Superior probes were at the costal margin, 5 inches lateral of midline. Inferior probes were vertically inferior at predetermined distances of 6, 9, 12, and 16 inches apart. Each volunteer was given the goal of slashing a target 10 feet away with a rubber knife during CEW exposure. As a means of motivation, they believed the exposure would continue until they reached the goal (in reality, the exposure was terminated once no further progress was made). Each volunteer received one exposure from an X26 and an X2 CEW. The exposure order was randomized with a 2-minute rest between them. Exposures were recorded on high-speed, high-resolution video. Videos were reviewed and scored by six physician, kinesiology, and law officer experts using standardized criteria for effectiveness, including degree of upper extremity, lower extremity, and total body incapacitation, and degree of goal achievement. Reviews were compared descriptively and independently across probe spread distances and between devices. Results: There were 8 exposures (4 pairs) available for evaluation, and reviewers found no discernible descriptive differences in effectiveness between the X26 and the X2 CEWs.
Background: The trend towards higher gasoline prices over the past decade in the U.S. has been associated with higher rates of bicycle use for utilitarian trips. This shift towards non-motorized transportation should be encouraged from a physical activity promotion and sustainability perspective. However, gas price-induced changes in travel behavior may be associated with higher rates of bicycle-related injury. Increased consideration of injury prevention will be a critical component of developing healthy communities that help safely support more active lifestyles. Objectives: The purpose of this analysis was to a) describe bicycle-related injuries treated in U.S. emergency departments between 1997 and 2009 and b) investigate the association between gas prices and both the incidence and severity of adult bicycle injuries. We hypothesized that as gas prices increase, adults are more likely to shift away from driving for utilitarian travel toward more economical non-motorized modes of transportation, resulting in increased risk exposure for bicycle injuries. Methods: Bicycle injury data for adults (16-65 years) were obtained from the National Electronic Injury Surveillance System (NEISS) database for emergency department visits between 1997 and 2009. The relationship between national seasonally adjusted monthly rates of bicycle injuries, obtained by a seasonal decomposition of time series, and average national gasoline prices, reported by the Energy Information Administration, was examined using a linear regression analysis. Results: Monthly rates of bicycle injuries requiring emergency care among adults increase significantly as gas prices rise (p < 0.0001; see figure). An additional 1,149 adult injuries (95% CI 963-1,336) can be predicted to occur each month in the U.S. (>13,700 injuries annually) for each $1 rise in average gasoline price. Injury severity also increases during periods of high gas prices, with a higher percentage of injuries requiring admission. Conclusion: Increases in adult bicycle use in response to higher gas prices are accompanied by higher rates of significant bicycle-related injuries. Supporting the use of non-motorized transportation will be imperative to address public health concerns such as obesity and climate change; however, resources must also be dedicated to improving bicycle-related injury care and prevention. Background: This is a secondary analysis of data collected for a randomized trial of oral steroids in emergency department (ED) musculoskeletal back pain patients. We hypothesized that higher pain scores in the ED would be associated with more days out of work. Objectives: To determine the degree to which days out of work for ED back pain patients are correlated with ED pain scores. Methods: Design: Prospective cohort. Setting: Suburban ED with 80,000 annual visits. Participants: Patients aged 18-55 years with moderately severe musculoskeletal back pain from a bending or twisting injury ≤2 days before presentation. Exclusion criteria included nonmusculoskeletal etiology, direct trauma, motor deficits, and employer-initiated visits. Observations: We captured initial and discharge ED visual analog pain scores (VAS) on a 0-10 scale. Patients were contacted approximately 5 days after discharge and queried about days out of work. We plotted days out of work versus initial VAS, discharge VAS, and change in VAS and calculated correlation coefficients. Using the Bonferroni correction because of multiple comparisons, alpha was set at 0.02.
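A minimal sketch of the correlation analysis just described: days out of work are correlated against each of the three pain-score variables, with the per-test alpha Bonferroni-adjusted for the three comparisons (0.05/3 ≈ 0.017, reported as 0.02). The data below are simulated placeholders, not the trial's measurements, and scipy is assumed to be available.

```python
import numpy as np
from scipy.stats import pearsonr

def correlate_with_bonferroni(days_out, predictors, family_alpha=0.05):
    """Pearson correlation of days out of work against each predictor,
    flagging significance at the Bonferroni-adjusted alpha."""
    alpha = family_alpha / len(predictors)
    results = {}
    for name, values in predictors.items():
        r, p = pearsonr(values, days_out)
        results[name] = {"R2": round(r ** 2, 3), "p": round(p, 3), "significant": p < alpha}
    return results

# Simulated placeholder data (n = 67, as in the abstract)
rng = np.random.default_rng(1)
n = 67
initial_vas = rng.uniform(5, 10, n)
discharge_vas = np.clip(initial_vas - rng.uniform(0, 4, n), 0, 10)
days_out = rng.poisson(2, n).astype(float)

print(correlate_with_bonferroni(
    days_out,
    {"initial_vas": initial_vas,
     "discharge_vas": discharge_vas,
     "change_vas": initial_vas - discharge_vas},
))
```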
Results: We analyzed 67 patients for whom complete data were available. The mean age was 40 ± 9 years and 30% were female. The average initial and discharge ED pain scales were 8.0 ± 1.5 and 5.7 ± 2.2, respectively. On follow-up, 88% of patients were back to work and 36% did not lose any days of work. For the plots of the days out of work versus the initial and discharge VAS and the change in the VAS, the correlation coefficients (R²) were 0.03 (p = 0.17), 0.08 (p = 0.04), and 0.001 (p = 0.87), respectively. Conclusion: For ED patients with musculoskeletal back pain, we found no statistically significant correlation between days out of work and ED pain scores. Background: Conducted Electrical Weapons (CEWs) are common law enforcement tools used to subdue and repel violent subjects and, therefore, prevent further injury or violence from occurring in certain situations. The TASER X2 is a new generation of CEW that has the capability of firing two cartridges in a "semi-automatic" mode, and has a different electrical waveform and different output characteristics than older generation technology. There have been no data presented on the human physiologic effects of this new generation CEW. Objectives: The objective of this study was to evaluate the human physiologic effects of this new CEW. Methods: This was a prospective, observational study of human subjects. An instructor shot subjects in the abdomen and upper thigh with one cartridge, and subjects received a 10-second exposure from the device. Measured variables included: vital signs, continuous spirometry, pre- and post-exposure ECG, intra-exposure echocardiography, venous pH, lactate, potassium, CK, and troponin. Results: Ten subjects completed the study (median age 31.5, median BMI 29.4, 80% male). There were no important changes in vital signs or in potassium. The median increase in lactate during the exposure was 1.2, range 0.6 to 2.8. The median change in pH was −0.031, range −0.011 to 0.067. No subject had a clinically relevant ECG change, evidence of cardiac capture, or positive troponin up to 24 hours after exposure. The median change in creatine kinase (CK) at 24 hours was 313, range −40 to 3418. There was no evidence of impairment of breathing by spirometry. Baseline median minute ventilation was 14.2, which increased to 21.6 during the exposure (p = 0.05), and remained elevated at 21.6 post-exposure (p = 0.01). Conclusion: We detected a small increase in lactate and decrease in pH during the exposure, and an increase in CK 24 hours after the exposure. The physiologic effects of the X2 device appear similar to previous reports for ECD devices. Background: Public bicycle sharing (bikeshare) programs are becoming increasingly common in the US and around the world. These programs make bicycles easily accessible for hourly rental to the public. There are currently 15 active bikeshare programs in cities in the US, and more than 30 programs are being developed in cities including New York and Chicago. Despite the importance of helmet use, bikeshare programs do not provide the opportunity to purchase or rent helmets. While the programs encourage helmet use, no helmets are provided at the rental kiosks. Objectives: We sought to describe the prevalence of helmet use among adult users of bikeshare programs and users of personal bicycles in two cities with recently introduced bicycle sharing programs (Boston, MA and Washington, DC).
Methods: We performed a prospective observational study of bicyclists in Boston, MA and Washington, DC. Trained observers collected data during various times of the day and days of the week. Observers recorded the sex of the bicycle operator, type of bicycle, and helmet use. All bicycles that passed a single stationary location in any direction for a period of between 30 and 90 minutes were recorded. Data are presented as frequencies of helmet use by sex, type of bicycle (bikeshare or personal), time of the week (weekday or weekend), and city. Logistic regression was used to estimate the odds ratio for helmet use controlling for type of bicycle, sex, day of week, and city. Results: There were 43 observation periods in two cities at 36 locations. 3,073 bicyclists were observed. There were 562 (18.2%) bicyclists riding bikeshare bicycles. Overall helmet use was 45.5%, although helmet use varied significantly with sex, day of use, and type of bicycle (see figure). Bikeshare users were helmeted at a lower rate compared to users of personal bicycles (19.2% vs 51.4%). Logistic regression, controlling for type of bicycle, sex, day of week, and city, demonstrated that bikeshare users had higher odds of riding unhelmeted (OR 4.34, 95% CI 3.47-5.50). Women had lower odds of riding unhelmeted (OR 0.62, 0.52-0.73), while weekend riders were more likely to ride unhelmeted (OR 1.32, 1.12-1.55). Conclusion: Use of bicycle helmets by users of public bikeshare programs is low. As these programs become more popular and prevalent, efforts to increase helmet use among users should increase. Background: Abusive head trauma (AHT) represents one of the most severe forms of traumatic brain injury (TBI) among abused infants, with 30% mortality. Young adult males account for 75% of the perpetrators. Most AHT prevention programs are hospital-based and reach a predominantly female audience. There are no published reports of school-based AHT prevention programs to date. Objectives: 1. To determine whether a high school-based AHT educational program will improve students' knowledge of AHT and parenting skills. 2. To evaluate the feasibility and acceptability of a school-based AHT prevention program. Methods: This program was based on an inexpensive commercially available program developed by the National Center on Shaken Baby Syndrome. The program was modified to include a 60-minute interactive presentation that teaches teenagers about AHT, parenting skills, and caring for inconsolable crying infants. The program was administered in three high schools in Flint, Michigan during spring 2011. Students' knowledge was evaluated with a 17-item written test administered pre-intervention, post-intervention, and two months after program completion. Program feasibility and acceptability were evaluated through interviews and surveys with Flint area school social workers, parent educators, teachers, and administrators. Results: In all, 342 high school students (40% male) participated. Of these, 317 (92.7%) completed the pre-test and post-test, with 171 (50%) completing the two-month follow-up test. The mean pre-intervention, post-intervention, and two-month follow-up scores were 53%, 87%, and 90%, respectively. From pre-test to post-test, mean score improved 34% (p < 0.001). This improvement was even more profound in young males, whose mean post-test score improved by 38% (p < 0.001). Of the 69 participating social workers, parent educators, teachers, and administrators, 97% ranked the program as feasible and acceptable.
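For readers who want to see how a pre/post comparison of this kind can be run, here is a minimal sketch on simulated scores; the paired t-test is an assumed choice for illustration, not necessarily the analysis the authors used.

```python
# Hypothetical paired pre/post comparison of knowledge-test scores
# (proportion correct); the numbers are simulated, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 317                                        # students with pre- and post-tests
pre = rng.normal(0.53, 0.15, n).clip(0, 1)     # simulated pre-test scores
post = (pre + rng.normal(0.34, 0.10, n)).clip(0, 1)

t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test on matched scores
print(f"mean improvement = {(post - pre).mean():.1%}, t = {t_stat:.2f}, p = {p_value:.3g}")
```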
Conclusion: Students participating in our program showed an improvement in knowledge of AHT and parenting skills, which was retained after two months. Teachers, social workers, parent educators, and school administrators supported the program. This local pilot program has the potential to be implemented on a larger scale in Michigan with the ultimate goal of reducing AHT amongst infants. Background: Fear of litigation has been shown to affect physician practice patterns, and subsequently influence patient care. The likelihood of medical malpractice litigation has previously been linked with patient and provider characteristics. One common concern is that a patient may exaggerate symptoms in order to obtain monetary payouts; however, this has never been studied. Objectives: We hypothesize that patients are willing to exaggerate injuries for cash settlements and that there are predictive patient characteristics including age, sex, income, education level, and previous litigation. Methods: This prospective cross-sectional study spanned June 1 to December 1, 2011 at an urban tertiary care center in Philadelphia. Any patient medically stable enough to fill out a survey during study investigator availability was included. Two closed-ended paper surveys were administered over the research period. Standard descriptive statistics were utilized to report the incidence of: patients who desired to file a lawsuit, patients previously having filed lawsuits, and patients willing to exaggerate the truth in a lawsuit for a cash settlement. Chi-square analysis was performed to determine the relationship between patient characteristics and willingness to exaggerate injuries for a cash settlement. Results: Of 126 surveys, 11 were excluded due to incomplete data, leaving 115 for analysis. The mean age was 39 with a standard deviation of 16, and 40% were male. The incidence of patients who had the desire to sue at the time of treatment was 9%. The incidence of patients who had filed a lawsuit in the past was 35%. Of those patients, 26% had filed multiple lawsuits. Fifteen percent [95% CI 9-23%] of all patients were willing to exaggerate injuries for a cash settlement. Sex and income were found to be statistically significant predictors of willingness to exaggerate symptoms: 22% of females vs. 4% of males were willing to exaggerate (p = 0.01), and 20% of people with income less than $100,000/yr vs. 0% of those with income over $100,000/yr were willing to exaggerate (p = 0.03). Conclusion: Patients at an urban tertiary care center in Philadelphia admit to willingness to exaggerate symptoms for a cash settlement. Willingness to exaggerate symptoms is associated with female sex and lower income. Background: Current data suggest that as many as 50% of patients presenting to the ED with syncope leave the hospital without a defined etiology. Prior studies have suggested a prevalence of psychiatric disease as high as 26% in patients with syncope of unknown etiology. Objectives: To determine whether psychiatric disease and substance abuse are associated with an increased incidence of syncope of unknown etiology. Methods: A prospective, observational cohort study of consecutive ED patients ≥18 years presenting with syncope was conducted between 6/03 and 7/06. Patients were queried in the ED, and charts were reviewed, regarding a history of psychiatric disease, use of psychiatric medication, substance abuse, and duration. Data were analyzed using SAS with chi-square and Fisher's exact tests.
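To make the planned contingency-table analysis concrete, a minimal sketch with made-up counts (not the study data) is shown below; it mirrors the chi-square and Fisher's exact tests named above, here in Python rather than SAS.

```python
# Hypothetical 2x2 table: unknown-etiology syncope by psychiatric-disease
# status. Counts are invented for illustration only.
import numpy as np
from scipy import stats

table = np.array([[30, 47],     # psychiatric disease: unknown vs. identified etiology
                  [35, 120]])   # no psychiatric disease

chi2, p_chi, dof, expected = stats.chi2_contingency(table, correction=False)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"chi-square p = {p_chi:.3f}, Fisher's exact p = {p_fisher:.3f}, OR = {odds_ratio:.2f}")
```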
Results: We enrolled 519 patients who presented to the ED after syncope, 159 of whom did not have an identifiable etiology for their syncopal event. 36.5% of those without an identifiable etiology were male. 166 (32%) patients had a history of or current psychiatric disease (42% male), and 55 patients (11%) had a history of or current substance abuse (60% male). Among males with psychiatric disease, 39% had an unknown etiology of their syncopal event, compared to 22% of males without psychiatric disease (p = 0.009). Similarly, among all males with a history of substance abuse, 45% had an unknown etiology, as compared to 24% of males without a history of substance abuse (p = 0.01). A similar trend was not identified in elderly females with psychiatric disease (p = 0.96) or substance abuse (p = 0.19). However, syncope of unknown etiology was more common among both men and women under age 65 with a history of substance abuse (47%) compared to those without a history of substance abuse (27%; p = 0.01). Conclusion: Our results suggest that psychiatric disease and substance abuse are associated with an increased incidence of syncope of unknown etiology. Patients evaluated in the ED or even hospitalized with syncope of unknown etiology may benefit from psychiatric screening and possibly detoxification referral. This is particularly true in men. (Originally submitted as a "late-breaker.") Background: After discharge from an emergency department (ED), pain management often challenges parents, who significantly under-treat their children's pain. Rapid patient turnover and anxiety make education about home pain treatment difficult in the ED. Video education standardizes information and circumvents insufficient time and literacy. Objectives: To evaluate the effectiveness of a 6-minute instructional video for parents that targets common misconceptions about home pain management. Methods: We conducted a randomized, double-blinded clinical trial of parents of children ages 1-18 years who presented with a painful condition and were evaluated and discharged home in June and July 2011. Parents were randomized to a pain management video or an injury prevention control video. The primary outcome was the proportion of parents who gave pain medication at home. These data were recorded in a home pain diary and analyzed using a chi-square test. Parents' knowledge about pain treatment was tested before, immediately following, and 2 days after the intervention. McNemar's test statistic determined the odds that knowledge correlated with the intervention group. Results: 100 parents were enrolled: 59 watched the pain education video, and 41 the control video. 72.9% completed follow-up, providing information about home pain medication use. Significantly more parents provided at least one dose of pain medication to their children after watching the educational video: 96% vs. 80% (difference 16%, 95% CI 7.8%, 31.3%). The odds that the parent had correct knowledge about pain treatment significantly improved immediately following the educational video for knowledge about pain scores (p = 0.04), the effect of pain on function (p < 0.01), and pain medication misconceptions (p < 0.01). These significant differences in knowledge remained 3 days after the video intervention. Conclusion: The educational video about home pain treatment viewed by parents significantly increased the proportion of children receiving pain medication at home and significantly improved knowledge about at-home pain management.
Videos are an efficient tool to provide medical advice to parents that improves outcomes for children. Methods: This was a prospective, observational study of consecutive admitted CPU patients in a large-volume academic urban ED. Cardiology attendings round on all patients, and stress test utilization is driven by their recommendation. Eligibility criteria included: age >18, AHA low/intermediate risk, nondynamic ECGs, and normal initial Troponin I. Patients >75 years old and those with a history of CAD or a co-existing active medical problem were excluded. Based on prior studies and our estimated CPU census and demographic distribution, we estimated a sample size of 2,242 patients in order to detect a difference in stress utilization of 7% (2-tailed, α = 0.05, β = 0.8). We calculated a TIMI risk prediction score and a Diamond & Forrester (D&F) CAD likelihood score on each patient. T-tests were used for univariate comparisons of demographics, cardiac comorbidities, and risk scores. Logistic regression was used to estimate odds ratios (ORs) for receiving testing based on race, controlling for insurance and either TIMI or D&F score. Results: Over 18 months, 2,451 patients were enrolled. Mean age was 53 ± 12, and 54% (95% CI 52-56) were female. Sixty percent (95% CI 58-62) were Caucasian, 12% (95% CI 10-13) African American, and 24% (95% CI 23-26) Hispanic. Mean TIMI and D&F scores were 0.5 (95% CI 0.5-0.6) and 38% (95% CI 37-39). The overall stress testing rate was 52% (95% CI 50-54). After controlling for insurance status and TIMI or D&F scores, African American patients had significantly decreased odds of stress testing (OR TIMI 0.67 (95% CI 0.52-0.88), OR D&F 0.68 (95% CI 0.51-0.89)). Hispanics had significantly decreased odds of stress testing in the model controlling for D&F (OR D&F 0.78 (95% CI 0.63-0.98)). Conclusion: This study confirms that disparities in the workup of African American patients in the CPU are similar to those found in the general ED and the outpatient setting. Further investigation into the specific provider- or patient-level factors contributing to this bias is necessary. Outcomes for HF and COPD were SAE 11.6%, 7.8%; death 2.3%, 1.0%. We found univariate associations with SAE for these walk test components: too ill to walk (both HF and COPD P < 0.0001); highest heart rate ≥110 (HF P = 0.02, COPD P = 0.10); lowest SaO2 < 88% (HF P = 0.42, COPD P = 0.63); Borg score ≥5 (HF P = 0.47, COPD P = 0.52); walk test duration ≤1 minute (HF P = 0.07, COPD P = 0.22). After adjustment for multiple clinical covariates with logistic regression analyses, we found "walk test heart rate ≥110" had an odds ratio of 1.9 for HF patients and "too ill to start the walk test" had an odds ratio of 3.5 for COPD patients. Conclusion: We found the 3-minute walk test to be easy to administer in the ED and that maximum heart rate and inability to start the test were highly associated with adverse events in patients with exacerbations of HF and COPD, respectively. We suggest that the 3-minute walk test be routinely incorporated into the assessment of HF and COPD patients in order to estimate risk of poor outcomes. Objectives: The objective of this study was to investigate differences in consent rates between patients of different demographic groups who were invited to participate in minimal-risk clinical trials conducted in an academic emergency department.
Methods: This descriptive study analyzed prospectively collected data of all adult patients who were identified as qualified participants in ongoing minimal-risk clinical trials. These trials were selected for this review because they presented minimal factors known to be associated Background: Increasing rates of patient exposure to computerized tomography (CT) raise questions about appropriateness of utilization, as well as patient awareness of radiation exposure. Despite rapid increases in CT utilization and published risks, there is no national standard to employ informed consent prior to radiation exposure from diagnostic CT. Use of written informed consent for CT (ICCT) in our ED has increased patient understanding of the risks, benefits, and alternatives to CT imaging. Our team has developed an adjunct video educational module (VEM) to further educate ED patients about the CT procedure. Objectives: To assess patient knowledge and preferences regarding diagnostic radiation before and after viewing the VEM. Methods: The VEM was based on the ICCT currently utilized at our tertiary care ED (census 37,000 patients/year). The ICCT is written at an 8th grade reading level. This fall, VEM/ICCT materials were presented to a convenience sample of patients in the ED waiting room 9 AM-7 PM, Monday-Sunday. Patients who were <18 years of age, critically ill, or with a language barrier were excluded. To quantify the educational value of the VEM, a six-question pre-test was administered to assess baseline understanding of CT imaging. The patients then watched the VEM via iPad and reviewed the consent form. An eight-question post-test was then completed by each subject. No PHI were collected. Pre- and post-test results were analyzed using McNemar's test for individual questions and a paired t-test for the summed score (SAS version 9.2). Results: 100 patients consented and completed the survey. The average pre-test score for subjects was poor, 66% correct. Review of the VEM/ICCT materials increased patient understanding of medical radiation, as evidenced by an improved post-test score of 79%. Mean improvement between tests was 13% (p < 0.0001). 78% of subjects responded that they found the materials helpful and that they would like to receive ICCT. Conclusion: The addition of a video educational module improved patient knowledge regarding CT imaging and medical radiation, as quantified by pre- and post-testing. Patients in our study sample reported that they prefer to receive ICCT. By educating patients about the risks associated with CT imaging, we increase informed, shared decision making, an essential component of patient-centered care. Objectives: We sought to determine the relationship between patients' pain scores and their rate of consent to ED research. We hypothesized that patients with higher pain scores would be less likely to consent to ED research. Methods: Retrospective observational cohort study of potential research subjects in an urban academic hospital ED with an average annual census of approximately 70,000 visits. Subjects were adults older than 18 years with a chief complaint of chest pain within the last 12 hours, making them eligible for one of two cardiac biomarker research studies. The studies required only blood draws and did not offer compensation. Two reviewers extracted data from research screening logs. Patients were grouped according to pain score at triage, pain score at the time of approach, and improvement in pain score (triage score minus approach score).
The main outcome was consent to research. Simple proportions for consent rates by pain score tertiles were calculated. Two multivariate logistic regression analyses were performed with consent as outcome and age, race, sex, and triage or approach pain score as predictors. Results: Overall, 396 potential subjects were approached for consent. Patients were 58% Caucasian, 49% female, and with an average age of 57 years. Six patients did not have pain scores recorded at all and 48 did not have scores documented within 2 hours of approach and were excluded from relevant analyses. Overall, 80.1% of patients consented. Consent rates by tertiles at triage, at time of approach, and by pain score improvement are shown in Tables 1 and 2. After adjusting for age, race, and sex, neither triage (p = 0.75) nor approach (p = 0.65) pain scores predicted consent. Conclusion: Research enrollment is feasible even in ED patients reporting high levels of pain. Patients with modest improvements in pain levels may be more likely to consent. Future research should investigate which factors influence patients' decisions to participate in ED research. Conclusion: In this multicenter study of children hospitalized with bronchiolitis neither specific viruses nor their viral load predicted the need for CPAP or intubation, but young age, low birth weight, presence of apnea, severe retractions, and oxygen saturation <85% did. We also identified that children requiring CPAP or intubation were more likely to have mothers who smoked during pregnancy and a rapid respiratory worsening. Mechanistic research in these high-risk children may yield important insights for the management of severe bronchiolitis. Brigham & Women's Hospital, Boston, MA Background: Siblings and children who share a home with a physically abused child are thought to be at high risk for abuse. However, rates of injury in these children are unknown. Disagreements between medical and Child Protective Services professionals are common and screening is highly variable. Objectives: Our objective was to measure the rates of occult abusive injuries detected in contacts of abused children using a common screening protocol. Methods: This was a multi-center, observational cohort study of 20 child abuse teams who shared a common screening protocol. Data were collected between Jan 15, 2010 and April 30, 2011 for all children <10 years undergoing evaluation for physical abuse and their contacts. For contacts of abused children, the protocol recommended physical examination for all children <5 years, skeletal survey and physical exam for children <24 months, and physical exam, skeletal survey, and neuroimaging for children <6 months old. Results: Among 2,825 children evaluated for abuse, 618 met criteria as ''physically abused'' and these had 477 contacts. For each screening modality, screening was completed as recommended by the protocol in approximately 75% of cases. Of 134 contacts who met criteria for skeletal survey, new injuries were identified in 16 (12.0%). None of these fractures had associated findings on physical examination. Physical examination identified new injuries in 6.2% of eligible contacts. Neuroimaging failed to identify new injuries among 25 eligible contacts less than 6 months old. Twins were at significantly increased risk of fracture relative to other nontwin contacts (OR 20.1). Conclusion: These results support routine skeletal survey for contacts of physically abused children <24 months old, regardless of physical examination findings. 
Even for children where no injuries are identified, these results demonstrate that abuse is common among children who share a home with an abused child, and support including contacts in interventions (foster care, safety planning, social support) designed to protect physically abused children. Methods: This was a retrospective study evaluating all children presenting to eight paediatric, university-affiliated EDs during one year in 2010-2011. In each setting, information regarding triage and disposition was prospectively registered by clerks in the ED database. Anonymized data were retrieved from the ED computerized database of each participating centre. In the absence of a gold standard for triage, hospitalisation, admission to the intensive care unit (ICU), length of stay in the ED, and the proportion of patients who left without being seen by a physician (LWBS) were used as surrogate markers of severity. The primary outcome measure was the association between triage level (from 1 to 5) and hospitalisation. The association between triage level and dichotomous outcomes was evaluated by a chi-square test, while a Student's t-test was used to evaluate the association between triage level and length of stay. It was estimated that the evaluation of all children visiting these EDs for a one-year period would provide a minimum of 1,000 patients in each triage level and at least 10 events for outcomes having a proportion of 1% or more. Results: A total of 404,841 children visited the eight EDs during the study period. Pooled data demonstrated hospitalisation proportions of 59%, 30%, 10%, 2%, and 0.5% for patients triaged at levels 1, 2, 3, 4, and 5, respectively (p < 0.001). There was also a strong association between triage levels and admission to the ICU (p < 0.001), the proportion of children who LWBS (p < 0.001), and length of stay (p < 0.001). Background: Parents frequently leave the emergency department (ED) with incomplete understanding of the diagnosis and plan, but the relationship between comprehension and post-care outcomes has not been well described. Objectives: To explore the relationship between comprehension and post-discharge medication safety. Methods: We completed a planned secondary analysis of a prospective observational study of the ED discharge process for children aged 2-24 months. After discharge, parents completed a structured interview to assess comprehension of the child's condition, the medical team's advice, and the risk of medication error. Limited understanding was defined as a score of 3-5 on a scale from 1 (excellent) to 5 (poor). Risk of medication error was defined as a plan to use over-the-counter cough/cold medication and/or an incorrect dose of acetaminophen (measured by direct observation at discharge or reported dose at follow-up call). Parents identified as at risk received further instructions from their provider. The primary outcome was persistent risk of medication error assessed at a phone interview 5-10 days post-discharge. Background: A major barrier to administering analgesics to children is the perceived discomfort of intravenous access. The delivery of intranasal analgesia may be a novel solution to this problem. Objectives: We investigated whether the addition of the Mucosal Atomizer Device (MAD) as an alternative for fentanyl delivery would improve overall fentanyl administration rates in pediatric patients transported by a large urban EMS system.
We performed a historical control trial comparing the rate of pediatric fentanyl administration 6 months before and 6 months after the introduction of the MAD. Study subjects were pediatric trauma patients (age <16 years) transported by a large urban EMS agency. The control group was composed of patients treated in the 6 months before introduction of the MAD. The experimental group included patients treated in the 6 months after the addition of the MAD. Two physicians reviewed each chart and determined whether the patient met predetermined criteria for the administration of pain medication. A third reviewer resolved any discrepancies. Fentanyl administration rates were measured and compared between the two groups. We used two-sample t-tests and chi-square tests to analyze our data. Results: 228 patients were included in the study: 137 patients in the pre-MAD group and 91 in the post-MAD group. There were no significant differences in the demographic and clinical characteristics of the two groups. 42 (30.4%) patients in the control arm received fentanyl. 34 (37.8%) of patients in the experimental arm received fentanyl with 36% of the patients receiving fentanyl via the intranasal route. The addition of the MAD was not associated with a statistically significant increase in analgesic administration. Age and mechanism of injury were statistically more predictive of analgesia administration. Conclusion: While the addition of the Mucosal Atomizer Device as an alternative delivery method for fentanyl shows a trend towards increased analgesic administration in a prehospital pediatric population, age and mechanism of injury are more predictive in who receives analgesia. Further research is necessary to investigate the effect of the MAD on pediatric analgesic delivery. Methods: This was a prospective study evaluating PHP-SE before (pre) and after (post) a PPP introduction and 13 months later (13-mo). PHP groups received either PPP review and education or PPP review alone. The PPP included a pain assessment tool. The SE tool, developed and piloted by pediatric EMS experts, uses a ranked ordinal scale ranging from 'certain I cannot do it' (0) to 'completely certain I can do it' (100) for 10 items: pain assessment (3 items), medication administration (4) and dosing (1) , and reassessment (2). All 10 items and an averaged composite were evaluated for three age groups (adult, child, toddler). Paired sample t-tests compared post-and 13-mo scores to pre-PPP scores. Results: Of 264 PHPs who completed initial surveys, 146 PHPs completed 13-mo surveys. 106 (73%) received education and PPP review and 40 (27%) review only. PPP education did not affect PHP-SE (adult P = 0.87, child P = 0.69, toddler P = 0.84). The largest SE increase was in pain assessment. This increase persisted for child and toddler groups at 13 months. The immediate increase in composite SE scores for all age groups persisted for the toddler group at 13 months. Conclusion: Increases in composite and pain assessment PHP-SE occur for all age groups immediately after PPP introduction. The increase in pain assessment SE persisted at 13 months for pediatric age groups. Composite SE increase persisted for the toddler age group alone. Background: Pediatric medications administered in the prehospital setting are given infrequently and dosage may be prone to error. Calculation of dose based on known weight or with use of length-based tapes occurs even less frequently and may present a challenge in terms of proper dosing. 
Objectives: To characterize dosing errors based on weight-based calculations in pediatric patients in two similar emergency medical service (EMS) systems. Methods: We studied the five most commonly administered medications given to pediatric patients weighing 36 kg or less. Drugs studied were morphine, midazolam, epinephrine 1:10,000, epinephrine 1:1,000, and diphenhydramine. Cases from the electronic record were studied for a total of 19 months, from January 2010 to July 2011. Each drug was administered via intravenous, intramuscular, or intranasal routes. Drugs that were permitted to be titrated were excluded. An error was defined as greater than 25% above or below the recommended mg/kg dosage. Results: Out of 248,596 total patients, 13,321 were pediatric patients. 7,885 had documented weights of <36 kg, and 241 patients were given these medications. We excluded 72 patients for weight above the 97th percentile or below the 3rd percentile, or for missing weight documentation. Of the remaining 169 patients and 187 doses, errors were noted in 53 doses (28%; 95% CI 22%, 35%). Midazolam was the most common drug involved in errors (29 of 53 doses, or 55%; 95% CI 40%, 68%), followed by diphenhydramine (11/53, or 21%; 95% CI 11%, 34%), epinephrine (7/53, or 13%; 95% CI 5%, 25%), and morphine sulfate (6/53, or 11%; 95% CI 4%, 23%). Underdosing was noted in 34 of 53 errors (64%; 95% CI 50%, 77%), while excessive dosing was noted in 19 of 53 (36%; 95% CI 23%, 50%). Conclusion: Weight-based dosing errors in pediatric patients are common. While the clinical consequences of drug dosing errors in these patients are unknown, a considerable amount of inaccuracy occurs. Strategies beyond provision of reference materials are needed to prevent pediatric medication errors and reduce the potential for adverse outcomes. Background: Homelessness affects up to 3.5 million people a year. The homeless present more frequently to EDs, their ED visits are four times more likely to occur within 3 days of a prior ED evaluation, and they are admitted up to five times more frequently than others. We evaluated the effect of a Street Outreach Rapid Response Team (SORRT) on the health care utilization of a homeless population. A nonmedical outreach staff responds to the ED and intensely case manages the patient: arranging primary care follow-up, social services, temporary housing opportunities, and drug/alcohol rehabilitation services. Objectives: We hypothesized that this program would decrease the ED visits and hospital admissions of this cohort of patients. Methods: Before-and-after study at an urban teaching hospital in Indianapolis, Indiana, from June 2010 to December 2011. Upon identification of homeless status, SORRT was immediately notified. Eligibility for SORRT enrollment was determined by Housing and Urban Development homeless criteria, and the outreach staff attempted to enter all such identified patients into the program. The patients' health care utilization in the 6 months prior to program entry was compared with the 6 months after enrollment by prospectively collecting data and retrospectively querying the medical record for any unreported visits. Since the data were highly skewed, we used the nonparametric signed rank test to test for paired differences between periods. Results: 22 patients met criteria, but two refused participation.
The 20-patient cohort had 388 total ED visits (175 pre and 213 post), with a mean of 8.8 (SD 10.1) and a median of 6.5 (range 1-44) ED visits in the 6 months pre-SORRT, as compared to a mean of 10.7 (SD 19.5) and a median of 5.0 (range 0-90) in the 6 months post-SORRT (p = 0.815). There were 28 total inpatient admissions pre-intervention and 27 post-intervention, with a mean of 1.4 (SD 2.0) and a median of 0.5 (range 0-7) per patient in the pre-intervention period, as compared to 1.4 (SD 1.9) and 1.0 (range 0-6) in the post-intervention period (p = 0.654). In the pre-SORRT period, 50.0% had at least one inpatient admission, as compared to 55.0% post-SORRT (p = 1.00). There were no differences in ICU days or overall length of stay between the two periods. Conclusion: An aggressive case management program beginning immediately with homeless status recognition in the ED has not demonstrated success in decreasing utilization in our population. Methods: This was a secondary analysis of a prospective randomized trial that included consenting patients discharged with outpatient antibiotics from an urban county ED with an annual census of 100,000. Patients unable to receive text messages or voice-mails were excluded. Health literacy was assessed using a validated health literacy assessment, the Newest Vital Sign (NVS). Patients were randomized to a discharge instruction modality: 1) standard care, typed and verbal medication and case-specific instructions; 2) standard care plus text-messaged instructions sent to the patient's cell phone; or 3) standard care plus voice-mailed instructions sent to the patient's cell phone. Patients were called at 30 days to determine preference for instruction delivery modality. Preference for discharge instruction modality was analyzed using z-tests for proportions. Results: 758 patients were included (55% female, median age 30, range 5 months to 71 years); 98 were excluded. 23% had an NVS score of 0-1, 31% 2-3, and 46% 4-6. Among the 51.1% of participants reached at 30 days, 26% preferred a modality other than written. There was a difference in the proportion of patients who preferred discharge instructions in written plus another modality (see table). With the exception of written plus another modality, patient preference was similar across all NVS score groups. Conclusion: In this sample of urban ED patients, more than one in four patients prefer non-traditional (text message, voice-mail) modalities of discharge instruction delivery to the standard care (written) modality alone. Additional research is needed to evaluate the effect of instructional modality on accessibility and patient compliance. Conclusion: Cumulative SAPS II scoring fails to predict mortality in OHCA. The risk scores assigned to age, GCS, and HCO3 independently predict mortality and, combined, are good mortality predictors. These findings suggest that an alternative severity-of-illness score should be used in post-cardiac arrest patients. Future studies should determine optimal risk scores of SAPS II variables in a larger cohort of OHCA. Objectives: To determine the extent to which CPP recovers to pre-pause levels with 20 seconds of CPR after a 10-second interruption in chest compressions for ECG rhythm analysis. Methods: This was a secondary analysis of prospectively collected data from an IACUC-approved protocol. Forty-two Yorkshire swine (weighing 25-30 kg) were instrumented under anesthesia. VF was electrically induced.
After 12 minutes of untreated VF, CPR was initiated and a standard dose of epinephrine (SDE; 0.01 mg/kg) was given. After 2.5 minutes of CPR to circulate the vasopressor, compressions were interrupted for 10 seconds to analyze the ECG rhythm. This was immediately followed by 20 seconds of CPR to restore CPP before the first RS was delivered. If the RS failed, CPR resumed, additional vasopressors (SDE and vasopressin 0.57 mg/kg) were given, and the sequence was repeated. The CPP was defined as aortic diastolic pressure minus right atrial diastolic pressure. The CPP values were extracted at three time points: immediately after the 2.5 minutes of CPR, following the 10-second pause, and immediately before defibrillation for the first two RS attempts in each animal. Eighty-three sets of measurements were logged from 42 animals. Descriptive statistics were used to analyze the data. Background: In most cities, the proportion of patients who achieve prehospital return of spontaneous circulation (ROSC) is less than 10%. The association between time of day and OHCA outcomes in the prehospital setting is unknown. Objectives: We sought to determine whether rates of prehospital ROSC varied by time of day. We hypothesized that night OHCAs would exhibit lower rates of ROSC. Methods: We performed a retrospective review of cardiac arrest data from a large, urban EMS system. Included were all OHCAs occurring in individuals >18 years of age from 1/1/2008 to 12/31/2010. Excluded were traumatic arrests and cases where resuscitation measures were not performed. Day was defined as 7:00 am-6:59 pm, while night was 7:00 pm-6:59 am. We examined the association between time of day and paramedic-perceived prehospital ROSC in unadjusted and adjusted analyses. Variables included age, sex, race, presenting rhythm, AED application by a bystander or first responder, defibrillation, and bystander CPR performance. Analyses were performed using chi-square tests and logistic regression. Objectives: To determine whether a SMEI helps to improve physician compliance with the IHI bundle and reduce patient mortality in ED patients with S&S. Methods: We conducted a pre-SMEI retrospective review of four months of ED patients with S&S to determine baseline pre-SMEI physician compliance and patient mortality. We designed and completed a SMEI attended by 25 of 28 ED attending physicians and 28 of 30 ED resuscitation residents. Finally, we conducted a twenty-month post-SMEI prospective study of ongoing physician compliance and patient mortality in ED patients with S&S. Results: In the four-month pre-SMEI retrospective review, we identified 23 patients with S&S, with 61% overall physician compliance and a mortality rate of 30%. The average ED physician SMEI multiple-choice pre-test score was 74%, which showed a significant improvement to a post-test score of 94% (p = 0.0003). Additionally, 87% of ED physicians were able to describe three new clinical pearls learned, and 85% agreed that the SMEI would improve compliance. In the twenty months of the post-SMEI prospective study, we identified 144 patients with S&S, with 75% overall physician compliance and a mortality rate of 21%. Relative physician compliance improved 23% (p = 0.0001) and relative patient mortality was reduced by 32% (p < 0.0001) when comparing pre- and post-SMEI data. Conclusion: Our data suggest that a SMEI improves overall physician compliance with the six-hour goals of the IHI bundle and reduces patient mortality in ED patients with S&S.
Conclusion: Using a population-level, longitudinal, and multi-state analysis, the rate of return visits within 3 days is higher than previously reported, with nearly 1 in 12 returning back to the ED. We also provide the first estimation of health care costs for ED revisits. Background: The ability of patients to accurately determine their level of urgency is important in planning strategies that divert away from EDs. In fact, an understanding of patient self-triage abilities is needed to inform health policies targeting how and where patients access acute care services within the health care system. Objectives: To determine the accuracy of a patient's self-assessment of urgency compared against triage nurses. Methods: Setting: ED patients are assigned a score by trained nurses according to the Canadian Emergency Department Triage and Acuity Scale (CTAS). We present a cross-sectional survey of a random patient sample from 12 urban/regional EDs conducted during the winters of 2007 and 2009. This previously validated questionnaire, based on the British Healthcare Commission Survey, was distributed according to a modified Dillman protocol. Exclusion criteria consisted of: age 0-15 years, left prior to being seen/treated, died during ED visit, no contact information, presented with a privacy-sensitive case. Alberta Health Services provided linked non-survey administrative data. Results: 21,639 surveys distributed with a response rate of 46%. Patients rated health problems as life-threatening (6%), possibly life-threatening (22%), urgent (30%), somewhat urgent (37%), or not urgent (5%). Triage nurses assigned the same patients CTAS scores of I (<1%), II (20%), III (45%), IV (29%) or V (5%). Patients self-rated their condition as 3 or 4 points less urgent than the assigned CTAS score (<1% of the time), 2 points less urgent (5%), 1 point less urgent (25%), exactly as urgent (38%), 1 point more urgent (24%), 2 points more urgent (7%), or 3 or 4 points more urgent (1%, respectively). Among CTAS I or II patients, 54% described their problem as life-threatening/possibly life-threatening, 26% as urgent (risk of permanent damage), 18% as urgent (needed to be seen that day), and 2% as not urgent (wanted to be but did not need to be seen that day). Conclusion: The majority of ED patients are generally able to accurately assess the acuity of their problem. Encouraging patients with low-urgency conditions to self-triage to lower-acuity sources of care may relieve stress on EDs. However, physicians and patients must be aware that a small minority of patients are unable to self-triage safely. When the tourniquet was released, blood spurted from the injured artery as hydrostatic pressure decayed. Pressure and flow were recorded in three animals (see table) . The concept was proof-tested in a single fresh frozen human cadaver with perfusion through the femoral artery and hemorrhage from the popliteal artery. The results were qualitatively and quantitatively similar to the swine carcass model. Conclusion: A perfused swine carcass can simulate exsanguinating hemorrhage for training purposes and serves as a prototype for a fresh-frozen human cadaver model. Additional research and development are required before the model can be widely applied. Background: In the pediatric emergency department (PED), clinicians must work together to provide safe and effective care. 
Crisis resource management (CRM) principles have been used to improve team performance in high-risk clinical settings, while simulation allows practice and feedback of these behaviors. Objectives: To develop a multidisciplinary educational program in a PED using simulation-enhanced teamwork training to standardize communication and behaviors and to identify latent safety threats. Methods: Over 6 months, a workgroup of physicians and nurses with experience in team training and simulation developed an educational program for the clinical staff of a tertiary PED. Goals included creating a didactic curriculum to teach the principles of CRM, incorporating CRM principles into simulation-enhanced team training through in-situ and center-based exercises, and utilizing assessment instruments to evaluate teamwork, completion of critical actions, and the presence of latent safety threats during in-situ simulated resuscitations. Results: During Phase I, 130 clinicians, divided into teams, participated in 90-minute pre-training assessments of PALS-based in-situ simulations. In Phase II, staff participated in a 6-hour curriculum reviewing key CRM concepts, including team training exercises utilizing simulation and expert debriefing. In Phase III, staff participated in post-training 90-minute teamwork and clinical skills assessments in the PED. In all phases, critical action checklists (CAC) were tabulated by simulation educators. In-situ simulations were recorded for later review using the assessment tools. After each simulation, educators facilitated discussion of perceptions of teamwork and identification of systems issues and latent hazards. Overall, 54 in-situ simulations were conducted, capturing 97% of the physicians and 84% of the nurses. CAC data were collected by an observer and compared to video recordings. Over 20 significant systems issues, latent hazards, and knowledge deficits were identified. All components of the program were rated highly by 90% of the staff. Conclusion: A workgroup of PEM, simulation, and team training experts developed a multidisciplinary team training program that used in-situ and center-based simulation and a refined CRM curriculum. Unique features of this program include its multidisciplinary focus, the development of a variety of assessment tools, and the use of in-situ simulation for evaluation of systems issues and latent hazards. This program was tested in a PED, and findings will be used to refine care and develop a sustainment program while addressing the issues identified. Objectives: Our hypothesis is that participants trained on high-fidelity mannequins will perform better than participants trained on low-fidelity mannequins on both the ACLS written exam and in performance of critical actions during megacode testing. Methods: The study was performed in the context of an ACLS Initial Provider course for new PGY1 residents at the Penn Medicine Clinical Simulation Center and involved three training arms: 1) low fidelity (low-fi): Torso-Rhythm Generator; 2) mid-fidelity (mid-fi): Laerdal SimMan® turned OFF; and 3) high-fidelity (high-fi): Laerdal SimMan® turned ON. Training in each arm of the study followed standard AHA protocol. Educational outcomes were evaluated by scores on the ACLS written examination and expert rater reviews of ACLS megacode videos performed by trainees during the course. A sample of 54 subjects was randomized to one of the three training arms: low-fi (n = 18), mid-fi (n = 18), or high-fi (n = 18).
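As a small illustration of the comparison reported in the next section, the sketch below runs a one-way ANOVA across three equal-sized arms on simulated post-test scores; all numbers are hypothetical.

```python
# Hypothetical one-way ANOVA across three training arms (n = 18 each);
# the scores are simulated, not the course data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
low_fi = rng.normal(0.9, 0.1, 18).clip(0, 1)    # simulated post-test proportions
mid_fi = rng.normal(0.9, 0.1, 18).clip(0, 1)
high_fi = rng.normal(0.8, 0.1, 18).clip(0, 1)

f_stat, p_value = stats.f_oneway(low_fi, mid_fi, high_fi)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```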
Results: Statistical significance across the groups was determined using analysis of variance (ANOVA). The three groups had similar written pre-test scores [low-fi 0.4 (0.1), mid-fi 0.5 (0.1), and high-fi 0.4 (0.2)] and written post-test scores [low-fi 0.9 (0.1), mid-fi 0.9 (0.1), and high-fi 0.8 (0.1)]. Similarly, test improvement was not significantly different. After completion of the course, high-fi subjects were more likely to report that they felt comfortable in their simulator environment (p = 0.005). Low-fi subjects were less likely to perceive a benefit in ACLS training from high-fi technology (p < 0.001). ACLS instructors were not rated significantly differently by the subjects using the Debriefing Assessment for Simulation in Healthcare (DASH) student version, except for element 6, where the high-fi group subjects reported lower scores (6.1 vs 6.6 and 6.7 in the other groups, p = 0.046). Objectives: We sought to determine if stress associated with the performance of a complex procedural task can be affected by level of medical training. Heart rate variability (HRV) is used as a measure of autonomic balance, and therefore an indicator of the level of stress. Methods: Twenty-one medical students and emergency medicine residents were enrolled. Participants performed airway procedures on an airway management trainer. HRV data were collected using a continuous heart rate variability monitoring system. Participant HRV was monitored at baseline, during the unassisted first attempt at endotracheal intubation, during supervised practice, and then during a simulated respiratory failure clinical scenario. Standard deviation of beat-to-beat variability (SDNN), very low frequency (VLF), total power (TP), and low frequency (LF) were analyzed to determine the effect of practice and level of training on the level of stress. Cohen's d was used to determine differences between study groups. Results: SDNN data showed that second-year residents were less stressed during all stages than were fourth-year medical students (avg d = 1.12). VLF data showed third-year residents exhibited less sympathetic activity than did first-year residents (avg d = −0.68). The opportunity to practice resulted in less stress for all participants. TP data showed that residents had a greater degree of control over their autonomic nervous system (ANS) than did medical students (avg d = 0.85). LF data showed that subjects were more engaged in the task at hand as the level of training increased, indicating autonomic balance (avg d = 0.80). Conclusion: Our HRV data show that stress associated with the performance of a complex procedural task is reduced by increased training. HRV may provide a quantitative measure of physiologic stress during the learning process and thus serve as a marker of when a subject is adequately trained to perform a particular task. Objectives: We seek to examine whether intubation during CPR can be done as efficiently as intubation without ongoing CPR. The hypothesis is that the predictable movement of an automated chest compression device will make intubation easier than the random movement from manual CPR. Methods: The project was an experimental controlled trial and took place in the emergency department at a tertiary referral center in Peoria, Illinois. Emergency medicine residents, attendings, paramedics, and other ACLS-trained staff were eligible for participation.
In randomized order, each participant attempted intubation on a mannequin with no CPR ongoing, during CPR with a human compressor, and during CPR with an automatic chest compression device (Physio Control Lucas 2). Participants could use whichever style laryngoscope they felt most comfortable with and they were timed during the three attempts. Success was determined after each attempt. Results: There were 43 participants in the trial. The success rate in the control group and the automated CPR group were both 88% (38/43) and the success rate in the manual CPR group was 74% (32/43). The differences in success rates were not statistically significant (p = 0.99 and p = 0.83). The automated CPR group had the fastest average time (13.6 sec; p = 0.019). The mean times for intubation with manual CPR and no CPR were not statistically different (17.1 sec, 18.1 sec; p = 0.606). Conclusion: The success rate of tracheal intubation with ongoing chest compression was the same as the success rate of intubation without CPR. Although intubation with automatic chest compression was faster than during other scenarios, all methods were close to the 10 second timeframe recommended by ACLS. Based on these findings, it may not always be necessary to hold CPR to place a definitive airway; however, further studies will be needed. Background: After acute myocardial infarction, vascular remodeling in the peri-infarct area is essential to provide adequate perfusion, prevent additional myocyte loss, and aid in the repair process. We have previously shown that endogenous fibroblast growth factor 2 (FGF2) is essential to the recovery of contractile function and limitation of infarct size after cardiac ischemia-reperfusion (IR) injury. The role of FGF2 in vascular remodeling in this setting is currently unknown. Objectives: Determine the role of endogenous FGF2 in vascular remodeling in a clinically relevant, closed-chest model of acute myocardial infarction. Methods: Mice with a targeted ablation of the Fgf2 gene (Fgf2 knockout) and wild type controls were subjected to a closed-chest model of regional cardiac IR injury. In this model, mice were subjected to 90 minutes of occlusion of the left anterior descending artery followed by reperfusion for either 1 or 7 days. Immunofluorescence was performed on multiple histological sections from these hearts to visualize capillaries (endothelium, anti-CD31 antibody), larger vessels (venules and arterioles, antismooth muscle actin antibody), and nuclei (DAPI). Digital images were captured, and multiple images from each heart were measured for vessel density and vessel size. Results: Sham-treated Fgf2 knockout and wild type mice show no differences in capillary or vessel density suggesting no defect in vessel formation in the absence of endogenous FGF2. When subjected to closed-chest regional cardiac IR injury, Fgf2 knockout hearts had normal capillary and vessel number and size in the peri-infarct area after 1 day of reperfusion compared to wild type controls. However, after 7 days, Fgf2 knockout hearts showed significantly decreased capillary and vessel number and increased vessel size compared to wild type controls (p < 0.05). Conclusion: These data show the necessity of endogenous FGF2 in vascular remodeling in the peri-infarct zone in a clinically relevant animal model of acute myocardial infarction. 
These findings may suggest a potential role for modulation of FGF2 signaling as a therapeutic intervention to optimize vascular remodeling in the repair process after myocardial infarction. The Diagnosis of Aortic Dissections by ED Physicians is Rare Scott M. Alter, Barnet Eskin, John R. Allegra Morristown Medical Center, Morristown, NJ Background: Aortic dissection is a rare event. The most common symptom of dissection is chest pain, but chest pain is a frequent emergency department (ED) chief complaint and other diseases that cause chest pain, such as acute coronary syndrome and pulmonary embolism, occur much more frequently. Furthermore, 20% of dissections are without chest pain and 6% are painless. For all these reasons, diagnosing dissection can be difficult for the ED physician. We wished to quantify the magnitude of this problem in a large ED database. Objectives: Our goal was to determine the number of patients diagnosed by ED physicians with aortic dissections compared to total ED patients and to the total number of patients with a chest pain diagnosis. Methods: Design: Retrospective cohort. Setting: 33 suburban, urban, and rural New York and New Jersey EDs with annual visits between 8,000 and 75,000. Participants: Consecutive patients seen by ED physicians from January 1, 1996 through December 31, 2010. Observations: We identified aortic dissections using ICD-9 codes and chest pain diagnoses by examining all ICD-9 codes used over the period of the study and selecting those with a non-traumatic chest pain diagnosis. We then calculated the number of total ED patients and chest pain patients for every aortic dissection diagnosed by emergency physicians. We determined 95% confidence intervals (CIs). Results: From a database of 9.5 million ED visits, we identified 782 (0.0082%) aortic dissections, or one for every 12,200 (95% CI 11,400 to 13,100) visits. The mean age of aortic dissection patients was 58 ± 19 years and 57% were female. Of the total visits there were 763,000 (8%) with a chest pain diagnosis. Thus there is one aortic dissection diagnosis for every 980 (95% CI 910 to 1,050) chest pain diagnoses. Conclusion: The diagnosis of aortic dissections by ED physicians is rare. An ED physician seeing 3,000 to 4,000 patients a year would diagnose an aortic dissection approximately once every 3 to 4 years. An aortic dissection would be diagnosed once for approximately every 1,000 ED chest pain patients. Patients were excluded if they suffered a cardiac arrest, were transferred from another hospital, or if the CCL was activated for an inpatient or from EMS in the field. FP CCL activation was defined as 1) a patient for whom activation was cancelled in the ED and ruled out for MI or 2) a patient who went to catheterization but no culprit vessel was identified and MI was excluded. ECGs for FP patients were classified using standard criteria. Demographic data, cardiac biomarkers, and all relevant time intervals were collected according to an on-going quality assurance protocol. Results: A total of 506 CCL activations were reviewed, with 68% male, average age 57, and 59% black. There were 210 (42%) true STEMIs and 86 (17%) FP activations. There were no significant differences between the FP patients who did and did not have catheterization. For those FP patients who had a catheterization (13%), ''door to page'' and ''door to lab'' times were significantly longer than the STEMI patients (see table) , but there was substantial overlap. 
There was no difference in sex or age, but FP patients were more likely to be black (p = 0.02). A total of 82 FP patients had ECGs available for review; findings included anterior elevation with convex (21%) or concave (13%) elevation, ST elevation from prior anterior (10%) or inferior (11%) MI, pericarditis (16%), presumed new LBBB (15%), early repolarization (5%), and other (9%). Conclusion: False CCL activation occurred in a minority of patients, most of whom had ECG findings warranting emergent catheterization. The rate of false CCL activation appears acceptable. Background: Atrial fibrillation (AF) is the most common cardiac arrhythmia treated in the ED, leading to high rates of hospitalization and resource utilization. Dedicated atrial fibrillation clinics offer the possibility of reducing the admission burden for AF patients presenting to the ED. While the referral base for these AF clinics is growing, it is unclear to what extent these clinics contribute to reducing the number of ED visits and hospitalizations related to AF. Objectives: To compare the number of ED visits and hospitalizations among discharged ED patients with a primary diagnosis of AF who followed up with an AF clinic and those who did not. Methods: A retrospective cohort study and medical records review including three major tertiary centres in Calgary, Canada. A sample of 600 patients was taken representing 200 patients referred to the AF clinic from the Calgary Zone EDs and compared to 400 matched control ED patients who were referred to other providers for follow-up. The controls were matched for age and sex. Inclusion criteria included patients over 18 years of age, discharged during the index visit, and seen by the AF clinic between January 1, 2009 and October 25, 2010. Exclusion criteria included non-residents and patients hospitalized during the index visit. The number of cardiovascular-related ED visits and hospitalizations was measured. All data are categorical, and were compared using chi-square tests. Results: Patients in the control and AF clinic cohorts were similar for all baseline characteristics except for a higher proportion of first episode patients in the intervention arm. In the six months following the index ED visit, 55 study group patients (27.5%) visited an ED on 95 occasions, and 12 (6%) were hospitalized on 16 occasions. Of the control group, 122 patients (30.5%) visited an ED on 193 occasions, and 44 (11%) were hospitalized on 55 occasions. Using a chi-square test we found no significant difference in ED visits (p = 0.5063) or hospitalizations (p = 0.0664) between the control and AF clinic cohorts. Conclusion: Based on our results, referral from the ED to an AF clinic is not associated with a significant reduction in subsequent cardiovascular related ED visits and hospitalizations. Due to the possibility of residual confounding, randomized trials should be performed to evaluate the efficacy of AF clinics. reported an income of less than $10,000. There were no significant associations between sex, race, marital status, education level, income, insurance status, and subsequent 30-and-90 day readmission rates. HLA score was not found to be significantly related to readmission rates. The mean HLA score was 18.9 (sd = 7.87), equivalent to less than 6th grade literacy, meaning these patients may not be able to read prescription labels. 
For each unit increase in HFKT score, the odds of being readmitted within 30 days decreased by 0.219 (p < 0.001) and for 31-90 days decreased by 0.440 (p < 0.001). For each unit increase in SCBS score, the odds of being readmitted within 90 days decreased by 0.949 (p = 0.038). Conclusion: Health care literacy in our patient population is not associated with readmission, likely related to the low literacy rate of our study population. Better HF knowledge and self-care behaviors are associated with lower readmission rates. Greater emphasis should be placed on patient education and self-care behaviors regarding HF as a mechanism to decrease readmission rates. Comparison of Door to Balloon Times in Patients Presenting Directly or Transferred to a Regional Heart Center with STEMI Jennifer Ehlers, Adam V. Wurstle, Luis Gruberg, Adam J. Singer Stony Brook University, Stony Brook, NY Background: Based on the evidence, a door-to-balloon time (DTBT) of less than 90 minutes is recommended by the AHA/ACC for patients with STEMI. In many regions, patients with STEMI are transferred to a regional heart center for percutaneous coronary intervention (PCI). Objectives: We compared DTBT for patients presenting directly to a regional heart center with those for patients transferred from other regional hospitals. We hypothesized that DTBT would be significantly longer for transferred patients. Methods: Study Design-Retrospective medical record review. Setting-Academic ED at a regional heart center with an annual census of 80,000 that includes a catchment area of 12 hospitals up to 50 miles away. Patients-Patients with acute STEMI identified on ED 12-lead ECG. Measures-Demographic and clinical data including time from triage to ECG, from ECG to activation of the regional catheterization lab, from initial triage to PCI (DTBT), and door to intravascular balloon deployment (D2B). Methods: The study was performed in an inner-city academic ED between 1/1/07 and 12/31/10. Every patient for whom ED activation of our STEMI system occurred was included. All time data from a pre-existing quality assurance database were collected prospectively. Patient language was determined retrospectively by chart review. Results: There were 132 patients between 1/1/07 and 12/31/10. 21 patients (16%) were deemed too sick or unable to provide history and were excluded, leaving 111 patients for analysis. 85 (77%) spoke English and 26 (23%) did not. In the non-English group, Chinese was the most common language, in 22 (20%). Background: Syncope is a common, potentially high-risk ED presentation. Hospitalization for syncope, although common, is rarely of benefit. No population-based study has examined disparities in regional admission practices for syncope care in the ED. Moreover, there are no population-based studies reporting prognostic factors for 7- and 30-day readmission for syncope. Objectives: 1) To identify factors associated with admission as well as prognostic factors for 7- and 30-day readmission to these hospitals; 2) To evaluate variability in syncope admission practices across different sizes and types of hospitals. Methods: DESIGN -Multi-center retrospective cohort study using ED administrative data from 101 Albertan EDs. PARTICIPANTS/SUBJECTS -patients >17 years of age with syncope (ICD10: R55) as a primary or secondary diagnosis from 2007 to June 2011.
Readmission was defined as return visits to the ED or admission <7 days or 7-30 days after the index visit (including against medical advice and left without being seen during the index visit). OUTCOMES -factors associated with hospital admission at index presentation, and readmission following ED discharge, adjusted using multivariable logistic regression. Results: Overall, 44521 syncope visits occurred over 4 years. Increased age, increased length of stay (LoS), performance of CXR, transport by ground ambulance, and treatment at a low-volume hospital (non-teaching or non-large urban) were independently associated with index hospitalization. These same factors, as well as hospital admission itself, were associated with 7-day readmission. Additionally, increased age, increased LoS, performance of a head CT, treatment at a low-volume hospital, hospital admission, and female sex were independently associated with 7-30 day readmission. Arrival by ground ambulance was associated with a decreased likelihood of both 7-and 7-30 day readmission. Conclusion: Our data identify variations in practice as well as factors associated with hospitalization and readmission for syncope. The disparity in admission and readmission rates between centers may highlight a gap in quality of care or reflect inappropriate use of resources. Further research to compare patient out-comes and quality of patient care among urban and non-urban centers is needed. Background: Change in dyspnea severity (DS) is a frequently used outcome measure in trials of acute heart failure (AHF). However, there is limited information concerning its validity. Objectives: To assess the predictive validity of change in dyspnea severity. Methods: This was a secondary analysis of a prospective observational study of a convenience sample of AHF patients presenting with dyspnea to the ED of an academic tertiary referral center with a mixed urban/ suburban catchment area. Patients were enrolled weekdays, June through December 2006. Patients assessed their DS using a 10-cm visual analog scale at three times: the start of ED treatment (baseline) as well as at 1 and 4 hours after starting ED treatment. The difference between baseline and 1 hour was the 1-hour DS change. The difference between baseline and 4 hours was the 4-hour DS change. Two clinical outcome measures were obtained: 1) the number of days hospitalized or dead within 30 days of the index visit (30-day outcome), and 2) the number of days hospitalized or dead within 90 days of the index visit (90-day outcome). Results: Data on 86 patients were analyzed. The median 30-day outcome variable was 6 days with an interquartile range (IQR) of 3 to 16. The median 90-day outcome variable was 10 days (IQR 4 to 27.5). The median 1-hour DS change was 2.6 cm (IQR 0.3 to 6.7). The median 4-hour DS change was 4.9 cm (IQR 2.2 to 8.2). The 30-day and 90-day mortality rates were 9% and 13% respectively. The spearman rank correlations and 95% confidence intervals are presented in the table below. Conclusion: While the point estimates for the correlations were below 0.5, the 95% CI for two of the correlations extended above 0.5. These pilot data support change in DS as a valid outcome measure for AHF when measured over 4 hours. A larger prospective study is needed to obtain a more accurate point estimate of the correlations. 
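For readers unfamiliar with the rank-based approach used in the dyspnea severity abstract above, the sketch below shows how a Spearman correlation between change in dyspnea severity and days hospitalized or dead can be computed. The data are invented for illustration, and the Fisher z-based confidence interval is an assumption rather than the authors' stated method.

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration only: 4-hour change in dyspnea severity
# (cm on a 10-cm VAS) and days hospitalized or dead within 30 days.
ds_change_4h = np.array([4.9, 2.2, 8.2, 0.5, 6.1, 3.3, 7.4, 1.8, 5.0, 2.9])
days_hosp_30 = np.array([6, 16, 3, 20, 5, 9, 2, 14, 7, 11])

rho, p_value = stats.spearmanr(ds_change_4h, days_hosp_30)

# Approximate 95% CI via the Fisher z-transformation (an assumption,
# not necessarily the interval method used in the abstract).
n = len(ds_change_4h)
z = np.arctanh(rho)
se = 1.0 / np.sqrt(n - 3)
ci_low, ci_high = np.tanh([z - 1.96 * se, z + 1.96 * se])

print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}, "
      f"95% CI {ci_low:.2f} to {ci_high:.2f}")
```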
Background: The majority of volume-quality research has focused on surgical outcomes in the inpatient setting; very few studies have examined the effect of emergency department (ED) case volume on patient outcomes. Objectives: To determine whether ED case volume of acute heart failure (AHF) is associated with short-term patient outcomes. Methods: We analyzed the 2008 Nationwide Emergency Department Sample (NEDS) and Nationwide Inpatient Sample (NIS), the largest, all-payer, ED and inpatient databases in the US. ED visits for AHF were identified with a principal diagnosis of ICD-9-CM code 428.xx. EDs were categorized into quartiles by ED case volume of AHF. The outcome measures were early inpatient mortality (within the first 2 days of admission), overall inpatient mortality, and hospital length of stay (LOS). Results: There were an estimated 946,000 visits for AHF from approximately 4,700 EDs in 2008; 80% were hospitalized. Of these, the overall inpatient mortality rate was 3.2%, and the median hospital LOS was 4 days. Early inpatient mortality was lower in the highest-volume EDs, compared with the lowest-volume EDs (0.8% vs. 2.1%; P < 0.001). Similar patterns were observed for overall inpatient mortality (3.0% vs. 4.1%; P < 0.001). In a multivariable analysis adjusting for 37 patient and hospital characteristics, early inpatient mortality remained lower in patients admitted through the highest-volume EDs (adjusted odds ratios [OR], 0.70; 95% confidence interval [CI], 0.52-0.96), as compared with the lowest-volume EDs. There was a trend towards lower overall inpatient mortality in the highest-volume EDs; however, this was not statistically significant (adjusted OR, 0.92; 95%CI, 0.75-1.14). By contrast, using the NIS data including various sources of admissions, a higher case volume of inpatient AHF patients predicted lower overall inpatient mortality (adjusted OR, 0.51; 95%CI, 0.40-0.65). The hospital LOS in patients admitted through the highest-volume EDs was slightly longer (adjusted difference, 0.7 day; 95%CI, 0.2-1.2), compared with the lowest-volume EDs. Conclusion: ED patients who are hospitalized for AHF have an approximately 30% reduced early inpatient mortality if they were admitted from an ED that handles a large volume of AHF cases. The ''practice-makesperfect'' concept may hold in emergency management of AHF. Emergency Department Disposition and Charges for Heart Failure: Regional Variability Alan B. Storrow, Cathy A. Jenkins, Sean P. Collins, Karen P. Miller, Candace McNaughton, Naftilan Allen, Benjamin S. Heavrin Vanderbilt University, Nashville, TN Background: High inpatient admission rates for ED patients with acute heart failure are felt partially responsible for the large economic burden of this most costly cardiovascular problem. Objectives: We examined regional variability in ED disposition decisions and regional variability in total dollars spent on ED services for admitted patients with primary heart failure. Methods: The 2007 Nationwide Emergency Department Sample (NEDS) was used to perform a retrospective, cohort analysis of patients with heart failure (ICD-9 code of 428.x) listed as the primary ED diagnosis. Demographics and disposition percentages (with SE) were calculated for the overall sample and by region: Northeast, South, Midwest, and West. To account for the sample design and to obtain national and regional estimates, a weighted analysis was conducted. Results: There were 941,754 weighted ED visits with heart failure listed as the primary diagnosis. 
Overall, over eighty percent were admitted (see table). Fifty-two percent of these patients were female; mean age was 72.7 years (SE 0.20). Hospitalization rates were higher in the Northeast (89.1%) and South (81.2%) than in the Midwest (76.0%) and West (74.8%). Total monies spent on ED services were highest in the South ($69,078,042), followed by the Northeast ($18,233,807), West ($6,360,315), and Midwest ($5,899,481). Conclusion: This large retrospective ED cohort suggests a very high national admission rate with significant regional variation in both disposition decisions and total monies spent on ED services for patients with a primary diagnosis of heart failure. Examining these estimates and variations further may provide strategies to reduce the economic burden of heart failure. Background: Workplace violence in health care settings is a frequent occurrence. Gunfire in hospitals is of particular concern. However, information regarding such workplace violence is limited. Accordingly, we characterized U.S. hospital-based shootings from 2000-2010. Objectives: To determine the extent of hospital-based shootings in the U.S. and the involvement of emergency departments. Methods: Using LexisNexis, Google, Netscape, PubMed, and ScienceDirect, we searched reports for acute care hospital shooting events from January 2000 through December 2010, and those with at least one injured victim were analyzed. Results: We identified 140 hospital-related shootings (86 inside the hospital, 54 on hospital grounds), in 39 states, with 216 victims, of whom 98 were perpetrators. In comparison to external shootings, shootings within the hospital have not increased over time (see figure). Perpetrators were from all age groups, including the elderly. Most of the events involved a determined shooter: grudge (26%), suicide (19%), ''euthanizing'' an ill relative (15%), and prisoner escape (12%). Ambient societal violence (8%) and mentally unstable patients (4%) were comparatively infrequent. The most commonly injured person was the perpetrator (45%). Hospital employees comprised only 21% of victims; physician (3%) and nurse (5%) victims were relatively infrequent. The emergency department was the most common site (29%), followed by patient rooms (20%) and the parking lot (20%). In 13% of shootings within hospitals, the weapon was a security officer's gun grabbed by the perpetrator. ''Grudge'' motive was the only factor determinative of hospital staff victims (OR = 4.34, 95% CI 1.85-10.17). Conclusion: Although hospital-based shootings are relatively rare, emergency departments are the most likely site. The unpredictable nature of this type of event represents a significant challenge to hospital security and deterrence practices, as most perpetrators proved determined, and many hospital shootings occur outside the building. Impact of Emergency Physician Board Certification on Patient Perceptions of ED Care Quality Albert G. Sledge IV1, Carl A. Germann1, Tania D. Strout1, John Southall2; 1Maine Medical Center, Portland, ME; 2Mercy Hospital, Portland, ME Background: The Hospital Value-Based Purchasing Program mandated by the Affordable Care Act is the latest example of how patients' perceptions of care will affect the future practice environment of all physicians. The type of training of medical providers in the emergency department (ED) is one possible factor affecting patient perceptions of care.
A unique situation in a Maine community ED led to the rapid transition from non-emergency medicine (EM) residency-trained physicians to all EM residency-trained, American Board of Emergency Medicine (ABEM)-certified providers. Objectives: The purpose of this study was to evaluate the effect of the implementation of an all EM-trained, ABEM-certified physician staff on patient perceptions of the quality of care they received in the ED. Methods: We retrospectively evaluated Press Ganey data from surveys returned by patients receiving treatment in a single, rural ED. Survey items addressed patients' perceptions of physician courtesy, time spent listening, concern for patient comfort, and informativeness. Additional items evaluated overall perceptions of care and the likelihood that the respondent would recommend the ED to another. Data were compared for the three years prior to and following implementation of the all EM-trained, ABEM-certified staff. We used the independent samples t-test to compare mean responses during the two time periods. Bonferroni's correction was applied to adjust for multiple comparisons. Results: During the study period, 3,039 patients provided surveys for analysis: 1,666 during the pre-certification phase and 1,373 during the post-certification phase. Across all six survey items, mean responses increased following transition to the board-certified staff. These improvements were statistically significant in each case: courtesy p < 0.001, time listening p < 0.001, concern for comfort p < 0.001, informativeness p < 0.001, overall perception of care p < 0.001, and likelihood to recommend p < 0.001. Conclusion: Data from this community ED suggest that transition from a non-EM residency-trained staff to a fully EM residency-trained, ABEM-certified model has important implications for patients' perceptions of the care they receive. We observed significant improvement in rating scores provided by patients across all physician-oriented and general ED measures. Background: Transfer of care from the ED to the inpatient floor is a critical transition during which miscommunication places patients at risk. The optimal form and content of handoff between providers has not been defined. In July 2011, ED-to-floor signout for all admissions to the medicine and cardiology floors was changed at our urban, academic, tertiary care hospital. Previously, signout was via an unstructured telephone conversation between the ED resident and admitting housestaff. The new signout utilizes a web-based ED patient tracking system and includes: 1) the ED resident completes a templated description of the ED course; 2) when a bed is assigned, an automated page is sent to the admitting housestaff; 3) the admitting housestaff review ED clinical information, including imaging, labs, medications, and nursing interventions (figure); 4) if the housestaff have specific questions about ED care, a telephone conversation between the ED resident and housestaff occurs; 5) if there are no specific questions, this is indicated electronically and the patient is transferred to the floor. Objectives: To describe the effects on patient safety (floor-to-ICU transfer in 24 hours) and ED throughput (ED length of stay (LOS) and time from bed assignment to ED departure) resulting from a change to an electronic, discussion-optional handoff system.
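The five-step electronic signout described above is essentially a small state machine; the following is a hypothetical sketch of that flow, with all function and field names ours rather than those of the study's tracking system.

```python
from dataclasses import dataclass, field

@dataclass
class Admission:
    """Hypothetical model of the electronic, discussion-optional signout flow."""
    templated_ed_course: str = ""                   # step 1: ED resident's templated summary
    bed_assigned: bool = False                      # step 2 trigger: automated page
    housestaff_reviewed: bool = False               # step 3: review of ED clinical data
    questions: list = field(default_factory=list)   # step 4 trigger: phone discussion
    acknowledged_electronically: bool = False       # step 5: transfer to floor

def process_signout(adm: Admission) -> str:
    if not adm.templated_ed_course:
        return "awaiting templated ED course from the ED resident"
    if not adm.bed_assigned:
        return "awaiting bed assignment (automated page not yet sent)"
    if not adm.housestaff_reviewed:
        return "automated page sent; awaiting housestaff review of ED data"
    if adm.questions:
        return "housestaff have questions; telephone discussion with the ED resident required"
    adm.acknowledged_electronically = True
    return "no questions; acknowledged electronically, patient may be transferred"

print(process_signout(Admission(templated_ed_course="CHF exacerbation, diuresed in ED",
                                bed_assigned=True, housestaff_reviewed=True)))
```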
Conclusion: Transition to a system in which signout of admitted patients is accomplished by the accepting housestaff's review of ED clinical information, supplemented by verbal discussion when needed, resulted in no significant change in the rate of floor-to-ICU transfer or ED LOS, and reduced the time from bed assignment to ED departure. Background: Emergency physicians may be biased against patients presenting with nonspecific complaints or those requiring more extensive work-ups. This may result in such patients being seen less quickly than those with more straightforward presentations, despite equal triage scores or potential for more dangerous conditions. Objectives: The goal of our study was to ascertain which patients, if any, were seen more quickly in the ED based on chief complaint. Methods: A retrospective report was generated from the EMR for all moderate acuity (ESI 3) adult patients who visited the ED from January 2005 through December 2010 at a large urban teaching hospital. The most common complaints were: abdominal pain, alcohol intoxication, back pain, chest pain, cough, dyspnea, dizziness, fall, fever, flank pain, headache, infection, pain (nonspecific), psychiatric evaluation, ''sent by MD,'' vaginal bleeding, vomiting, and weakness. Non-parametric independent sample tests assessed median time to be seen (TTBS) by a physician for each complaint. Differences in TTBS between genders and by age were also calculated. Chi-square testing compared percentages of patients in the ED per hour to assess for differences in the distribution of arrival times. Results: We obtained data from 116,194 patients. Patients with a chief complaint of weakness and dizziness waited the longest, with a median time of 35 minutes, and patients with flank pain waited the shortest, at 24 minutes (p < 0.0001) (Figure 1). Overall, males waited 30 minutes and females waited 32 minutes (p < 0.0001). Stratifying by gender and age, younger females between the ages of 18 and 50 waited significantly longer when presenting with a chief complaint of abdominal pain (p < 0.0001), chest pain (p < 0.05), or flank pain (p < 0.0001) as compared to males in the same age group (Figure 2). There was no difference in the distribution of arrival times for these complaints. Conclusion: While the absolute time differences are not large, there is a significant bias toward seeing young male patients more quickly than women or older males, despite the lower likelihood of dangerous conditions. Triage systems should perhaps better account for age and gender. Patients might benefit from efforts to educate EM physicians on the delays and potential quality issues associated with this bias in an attempt to move toward more egalitarian patient selection. Background: Detailed analysis of emergency department (ED) event data identified the time from completion of emergency physician evaluation (Doc Done) to the time patients leave the ED as a significant contributor to ED length of stay (LOS) and boarding at our institution. Process flow mapping identified the time from Doc Done to the time inpatient beds were ordered (BO) as an interval amenable to specific process improvements. Objectives: The purpose of this study was to evaluate the effect of ED holding orders for stable adult inpatient medicine (AIM) patients on: a) the time to BO and b) ED LOS. Methods: A prospective, observational design was used to evaluate the study questions.
Data regarding the time to BO and LOS outcomes were collected before and after implementation of the ED holding orders program. The intervention targeted stable AIM patients being admitted to hospitalist, internal medicine, and family medicine services. ED holding orders were placed following the admission discussion with the accepting service and special attention was paid to proper bed type, completion of the emergent work-up and the expected immediate course of the patient's hospital stay. Holding orders were of limited duration and expired 4 hours after arrival to the inpatient unit. Results: During the 6-month study period, 7321 patients were eligible for the ED holding orders intervention; 6664 (91.0%) were cared for using the standard adult medicine order set and 657 (9.0%) received the intervention. The median time from Doc Done to BO was significantly shorter for patients in the ED holding orders group, 41 min (IQR 19, 88) vs. 95 min (IQR 53, 154) for the standard adult medicine group, p < 0.001. Similarly, the median ED LOS was significantly shorter for those in the ED holding orders group, 413 min (IQR 331, 540) vs. 456 min (IQR 346, 581) for the standard adult medicine group, p < 0.001. No lapses in patient care were reported in the intervention group. Conclusion: In this cohort of ED patients being admitted to an AIM service, placing ED holding orders rather than waiting for a traditional inpatient team evaluation and set of admission orders significantly reduced the time from the completion of the ED workup to placement of a BO. As a result, ED LOS was also significantly shortened. While overall utilization of the intervention was low, it improved with each month. Emergency Department Interruptions in the Age of Electronic Health Records Matthew Albrecht, John Shabosky, Jonathan de la Cruz Southern Illinois University School of Medicine, Springfield, IL Background: Interruptions of clinical care in the emergency department (ED) have been correlated with increased medical errors and decreased patient satisfaction. Studies have also shown that most interruptions happen during physician documentation. With the advent of the electronic health record and computerized documentation, ED physicians now spend much of their clinical time in front of computers and are more susceptible to interruptions. Voice recognition dictation adjuncts to computerized charting boast increased provider efficiency; however, little is known about how data input of computerized documentation affects physician interruptions. Objectives: We present here observational interruptions data comparing two separate ED sites, one that uses computerized charting by conventional techniques and one assisted by voice recognition dictation technology. Methods: A prospective observational quality initiative was conducted at two teaching hospital EDs located less than 1 mile from each other. One site primarily uses conventional computerized charting while the other uses voice recognition dictation computerized charting. Four trained observers followed ED physicians for 180 minutes during shifts. The tasks each ED physician performed were noted and logged in 30 second intervals. Tasks listed were selected from a predetermined standardized list presented at observer training. Tasks were also noted as either completed or placed in queue after a change in task occurred. A total of 4140 minutes were logged. Interruptions were noted when a change in task occurred with the previous task being placed in queue. 
Data were then compared between sites. Results: ED physicians averaged 5.33 interruptions/ hour with conventional computerized charting compared to 3.47 interruptions/hour with assisted voice recognition dictation (p = 0.0165). Conclusion: Computerized charting assisted with voice recognition dictation significantly decreased total per hour interruptions when compared to conventional techniques. Charting with voice recognition dictation has the potential to decrease interruptions in the ED allowing for more efficient workflow and improved patient care. Background: Using robot assistants in health care is an emerging strategy to improve efficiency and quality of care while optimizing the use of human work hours. Robot prototypes capable of performing vital signs and assisting with ED triage are under development. However, ED users' attitudes toward robot assistants are not well studied. Understanding of these attitudes is essential to design user-friendly robots and to prepare EDs for the implementation of robot assistants. Objectives: To evaluate the attitudes of ED patients and their accompanying family and friends toward the potential use of robot assistants in the ED. Methods: We surveyed a convenience sample of adult ED patients and their accompanying adult family members and friends at a single, university-affiliated ED, 9/ 26/11-10/27/11. The survey consisted of eight items from the Negative Attitudes Towards Robots Scale (Normura et al.) modified to address robot use in the ED. Response options included a 5-point Likert scale. A summary score was calculated by summing the responses for all 8 items, with a potential range of 8 (completely negative attitude) to 40 (completely positive attitude). Research assistants gave the written surveys to subjects during their ED visit. Internal consistency was assessed using Cronbach's alpha. Bivariate analyses were performed to evaluate the association between the summary score and the following variables: participant type (patient or visitor), sex, race, time of day, and day of week. Results: Of 121 potential subjects approached, 113 (93%) completed the survey. Participants were 37% patients, 63% family members or friends, 62% women, 79% white, and had a median age of 45.5 years (IQR 18-84). Cronbach's alpha was 0.94. The mean summary score was 22.2 (SD = 0.87), indicating subjects were between ''occasionally'' and ''sometimes'' comfortable with the idea of ED robot assistants (see table) . Men were more positive toward robot use than women (summary score: 24.6 vs 20.8; p = 0.033). No differences in the summary score were detected based on participant type, race, time of day, or day of week. Conclusion: ED users reported significant apprehension about the potential use of robot assistants in the ED. Future research is needed to explore how robot designs and strategies to implement ED robots can help alleviate this apprehension. Background: Emergency department cardioversion (EDC) of recent-onset atrial fibrillation or flutter (AF) patients is an increasingly common management approach to this arrhythmia. Patients who qualify for EDC generally have few co-morbidities and are often discharged directly from the ED. This results in a shift towards a sicker population of patients admitted to the hospital with this diagnosis. Objectives: To determine whether hospital charges and length of stay (LOS) profiles are affected by emergency department discharge of AF patients. 
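The robot-attitudes survey described above sums eight 5-point Likert items into a score ranging from 8 to 40 and reports internal consistency as Cronbach's alpha; a minimal sketch of both computations follows, using invented responses and assuming all items are scored in the same (positive) direction, which the abstract does not specify.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert responses (1-5)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summary score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses for 6 respondents on 8 items (1 = negative, 5 = positive).
responses = np.array([
    [3, 2, 3, 3, 2, 3, 3, 2],
    [4, 4, 3, 4, 4, 4, 3, 4],
    [2, 2, 2, 1, 2, 2, 2, 2],
    [3, 3, 3, 3, 3, 3, 3, 3],
    [5, 4, 5, 4, 5, 4, 4, 5],
    [2, 3, 2, 2, 2, 3, 2, 2],
])

summary_scores = responses.sum(axis=1)          # possible range 8 to 40
print("Mean summary score:", summary_scores.mean())
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))
```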
Methods: Patients receiving treatment at an urban teaching community hospital with a primary diagnosis of atrial fibrillation or flutter were identified through the hospital's billing database. Information collected on each patient included date of service, patient status, length of stay, and total charges. Patient status was categorized as inpatient (admitted to the hospital), observation (transferred from the ED to an inpatient bed but placed in an observation status), or ED (discharged directly from the ED). The hospital billing system automatically defaults to a length of stay of 0 for observation patients. ED patients were assigned a length of stay of 0. Total hospital charges and mean LOS were determined for two different models: a standard model (SM) in which patients discharged from the ED were excluded from hospital statistics, and an inclusive model (IM) in which discharged ED patients were included in the hospital statistics. Statistical analysis was through ANOVA. Results: A total of 317 patients were evaluated for AF over an 18-month period. Of these, 197 (62%) were admitted, 22 (7%) were placed in observation status, and 98 (31%) were discharged from the ED. Hospital charges and LOS in days are summarized in the table. All differences were statistically significant (p < 0.001). Conclusion: Emergency department management can lead to a population of AF patients discharged directly from the ED. Exclusion of these patients from hospital statistics skews performance profiles, effectively punishing institutions for progressive care. Background: Recent health care reform has placed an emphasis on the electronic health record (EHR). With the advent of the EHR, it is common to see ED providers spending more time in front of computers documenting and away from patients. Finding strategies to decrease provider interaction with computers and increase time with patients may lead to improved patient outcomes and satisfaction. Computerized charting adjuncts, such as voice recognition software, have been marketed as ways to improve provider efficiency and patient contact. Objectives: We present here observational data comparing two separate ED sites, one where computerized charting is done by conventional techniques and one where it is assisted by voice recognition dictation, and their effects on physician charting and patient contact. Methods: A prospective observational quality initiative was conducted at two teaching hospitals located less than 1 mile from each other. One site primarily uses conventional computerized charting while the other uses voice recognition dictation. Four trained quality assistants observed ED physicians for 180 minutes during shifts. The tasks each physician performed were noted and logged in 30-second intervals. Tasks listed were identified from a predetermined standardized list presented at observer training. A total of 4,140 minutes were logged. Time allocated to charting and that allocated to direct patient care were then compared between sites. Results: ED physicians spent 28.6% of their time charting using conventional techniques vs 25.7% using voice recognition dictation (p = 0.4349). Time allocated to direct patient care was found to be 22.8% with conventional charting vs 25.1% using dictation (p = 0.4887). In total, ED physicians using conventional charting techniques spent 668/2340 minutes charting. ED physicians using voice recognition dictation spent 333/1800 minutes dictating and an additional 129.5/1800 minutes reviewing or correcting their dictations.
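The charting-time percentages just reported can be recovered from the logged minutes; the sketch below simply reproduces that arithmetic (small differences from the quoted 28.6% reflect rounding in the reported minute totals).

```python
# Minutes logged at each site, as reported in the abstract.
conventional_total = 2340
conventional_charting = 668

dictation_total = 1800
dictation_input = 333
dictation_review = 129.5

print(f"Conventional charting: {conventional_charting / conventional_total:.1%}")
print(f"Dictation (input + review): "
      f"{(dictation_input + dictation_review) / dictation_total:.1%}")
# Close to the reported 28.6% and 25.7%.
```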
Conclusion: The use of voice recognition-assisted dictation rather than conventional techniques did not significantly change the amount of time physicians spent charting or in direct patient care. Although voice recognition dictation decreased the initial input time of documenting data, a considerable amount of time was required to review and correct these dictations. Objectives: For our primary objective, we studied whether emergency department triage temperatures detected fever adequately when compared to a rectal temperature. As secondary objectives, we examined the temperature differences when a rectal temperature was taken within an hour of a non-invasive temperature, differences by temperature site (oral, axillary, temporal), and the patients who were initially afebrile but were found to be febrile by rectal temperature. Methods: We performed an electronic chart review at our inner-city, academic emergency department with an annual census of 110,000 patients. We identified all patients over the age of 18 who received a non-invasive triage temperature and a subsequent rectal temperature while in the ED from January 2002 through February 2011. Specific data elements included many aspects of the patient's medical record (e.g., subject demographics, temperature, and source). We analyzed our data with standard descriptive statistics, t-tests for continuous variables, and Pearson chi-square tests for proportions. Results: A total of 27,130 patients met our inclusion criteria. The mean difference between the initial temperature and the rectal temperature was 1.3°F, with 25.9% having rectal temperatures ≥2°F higher and 5.0% having rectal temperatures ≥4°F higher. The mean temperature difference among the 10,313 patients who had an initial noninvasive temperature and a rectal temperature within one hour was 1.4°F. The mean difference among patients who received oral, axillary, and temporal temperatures was 1.2°F, 1.8°F, and 1.2°F, respectively. Approximately one in five patients (18.1%) were initially afebrile and found to be febrile by rectal temperature, with an average temperature difference of 2.5°F. These patients had a higher rate of admission and were more likely to be admitted to the intensive care unit. Conclusion: There are significant differences between rectal temperatures and non-invasive triage temperatures in this emergency department cohort. In almost one in five patients, fever was missed by the triage temperature. Background: Pediatric emergency department (PED) overcrowding has become a national crisis, and has resulted in delays in treatment and patients leaving without being seen. Increased wait times have also been associated with decreased patient satisfaction. Optimizing PED throughput is one means by which to handle the increased demands for services. Various strategies have been proposed to increase efficiency and reduce length of stay (LOS). Objectives: To measure the effect of direct bedding, bedside registration, and patient pooling on PED wait times, length of stay, and patient satisfaction. Methods: Data were extracted from a computerized ED tracking system in an urban tertiary care PED. Comparisons were made between metrics for 2010 (23,681 patients) and the 3 months following process change (6,195 patients). During 2010, patients were triaged by one or two nurses, registered, and then sent either to a 14-bed PED or a physically separate 5-bed fast-track unit, where they were seen by a physician.
Following the process change, patients were brought directly to a bed in the 14-bed PED, triaged and registered, then seen by a physician. The fast-track unit was only utilized to accommodate patient surges. Results: Anticipating improved efficiencies, attending physician coverage was decreased by 9%. After instituting the process changes, improvements were noted immediately. Although daily patient volume increased by 3%, median time to be seen by a physician decreased by 20%. Additionally, median LOS for discharged patients decreased by 15%, and median time until the decision-to-admit decreased by 10%. Press Ganey satisfaction scores during this time increased by greater than 5 mean score points, which was reported to be a statistically significant increase. Conclusion: Direct bedding, bedside registration, and patient pooling were simple-to-implement process changes. These changes resulted in more efficient PED throughput, as evidenced by decreased times to be seen by a physician, LOS for discharged patients, and time until decision-to-admit. Additionally, patient satisfaction scores improved, despite decreased attending physician coverage and a 30% decrease in room utilization. During period 1, the OU was managed by the internal medicine department and staffed by primary care physicians and physician assistants. During periods 2 and 3, the OU was managed and staffed by EM physicians. Data collected included OU patient volume, length of stay (LOS) for discharged and admitted patients, admission rates, and 30-day readmission rates for discharged patients. Cost data collected included direct, indirect, and total cost per patient encounter. Data were compared using chi-square and ANOVA analyses followed by multiple pairwise comparisons using the Bonferroni method of p-value adjustment. Results: See table. The OU patient volume and percent of ED volume were greater in period 3 compared to periods 1 and 2. Length of stay, admission rates, 30-day readmission rates, and costs were greater in period 1 compared to periods 2 and 3. Conclusion: EM physicians provide more cost-effective care for patients in this large OU compared to non-EM physicians, resulting in shorter LOS for admitted and discharged patients, greater rates of patients discharged, and lower 30-day readmission rates for discharged patients. This is not affected by an increase in OU volume and shows a trend towards improvement. Background: Emergency department (ED) crowding continues to be a problem, and new intake models may represent part of the solution. However, few data exist on the sustainability and long-term effects of physician triage and screening on standard ED performance metrics, as most studies are short-term. Objectives: We examined the hypothesis that a physician screening program (START) sustainably improves standard ED performance metrics, including patient length of stay (LOS) and patients who left without completing assessment (LWCA). We also investigated the number of patients treated and dispositioned by START without using a monitored bed and the median patient door-to-room time. Methods: Design and Setting: This study was a retrospective before-and-after analysis of START in a Level I tertiary care urban academic medical center with approximately 90,000 annual patient visits. All adult patients from December 2006 until November 2010 were included, though only a subset was seen in START. START began at our institution in December 2007.
Observations: Our outcome measures were length of stay for ED patients, LWCA rates, patients treated and dispositioned by START without using a monitored bed, and door-to-room time. Statistics: Simple descriptive statistics were used. P-values for LOS were calculated with the Wilcoxon test and the p-value for LWCA was calculated with chi-square. Results: Table 2 shows that median length of stay for ED patients was reduced by 56 minutes/patient (p-value <0.0001) when comparing the most recent year to the year before START. Patients who LWCA were reduced from 4.8% to 2.9% (p-value <0.0001) during the same time period. We also found that in the first half-year of START, 18% of patients screened in the ED were treated and dispositioned without using a monitored bed, and by the end of year 3 this number had grown to 29%. Median door-to-room time decreased from 18.4 minutes to 9.9 minutes over the same period of time. Conclusion: A START system can provide sustained improvements in ED performance metrics, including a significant reduction in ED LOS, LWCA rate, and door-to-room time. Additionally, START can decrease the need for monitored ED beds and thus increase ED capacity. Labs were obtained in 98%, CT in 37%, US in 30%, and consultation in 23%. 18% of the cohort was admitted to the hospital. The most commonly utilized source of translation was a layman (35%). A professional translator was used in 9% and a translation service (language line, MARTY) in 30%. The examiner was fluent in the patient's language in 11%. Both the patient and examiner were able to maintain basic communication in 11%. There were 47 patients in the professional/fluent translation group and 44 patients in the lay translation group. There was no difference in ED LOS between groups (288 vs 304 min; p = 0.6). There was no difference in the frequency of lab tests, computerized tomography, ultrasound, consultations, or hospital admission. Frequencies did not differ by sex or age. Conclusion: Translation method was not associated with a difference in overall ED LOS, ancillary test use, or specialist consultation in Spanish-speaking patients presenting to the ED for abdominal pain. Emergency Department Patients on Warfarin -How Often Is the Visit Due to the Medication? Jim Killeen, Edward Castillo, Theodore Chan, Gary Vilke UCSD Medical Center, San Diego, CA Background: Warfarin has important therapeutic value for many patients, but has been associated with significant bleeding complications, hypersensitivity reactions, and drug-drug interactions, which can result in patients seeking care in the emergency department (ED). Objectives: To determine how often ED patients on warfarin present for care as a result of the medication itself. Methods: A multi-center prospective survey study in two academic EDs over 6 months. Patients who presented to the ED taking warfarin were identified, and ED providers were prospectively queried at the time of disposition regarding whether the visit was the result of a complication or side effect associated with warfarin. Data were also collected on patient demographics, chief complaint, triage acuity, vital signs, disposition, ED evaluation time, and length of stay (LOS). Patients identified with a warfarin-related cause for their ED visit were compared with those who were not. Statistical analysis was performed using descriptive statistics. Results: During the study period, 31,500 patients were cared for by ED staff, of whom 594 were identified as taking warfarin as part of their medication regimen.
Of these, providers identified 54.7% (325 patients) who presented with a warfarin-related complication as their primary reason for the ED visit. 56.9% (338) Each 100 hours of daily boarding is associated with a drop of 1.3 raw score points in both PG metrics. These seemingly small drops in raw scores translate into major changes in rankings on Press Ganey national percentile scales (a difference of as much as 10 percentile points). Our institution commonly has hundreds of hours of daily boarding. It is possible that patient-level measurements of boarding impact would show stronger correlation with individual satisfaction scores, as opposed to the daily aggregate measures we describe here. Our research suggests that reducing the burden of boarding on EDs will improve patient satisfaction. Background: Prolonged emergency department (ED) boarding is a key contributor to ED crowding. The effect of output interventions (moving boarders out of the ED into an intermediate area prior to admission or adding additional capacity to an observation unit) has not been well studied. Objectives: We studied the effect of a combined observation-transition (OT) unit, consisting of observation beds and an interim holding area for boarding ED patients, on the length of stay (LOS) for admitted patients, as well as secondary outcomes such as LOS for discharged patients, and left without being seen rates. Methods: We conducted a retrospective review (12 months pre-, 12 months post-design) of an OT unit at an urban teaching ED with 59,000 annual visits (study ED). We compared outcomes to a nearby communitybased ED with 38,000 annual visits in the same health system (control ED) where no capacity interventions were performed. The OT had 17 beds, full monitoring capacity, and was staffed 24 hours per day. The number of beds allocated to transition and observation patients fluctuated throughout the course of the intervention, based on patient demands. All analyses were conducted at the level of the ED-day. Wilcoxon rank-sum and analysis of covariance tests were used for comparisons; continuous variables were summarized with medians. Results: In unadjusted analyses, median daily LOS of admitted patients at the study ED was 31 minutes lower in the 12 months after the OT opened, 6.98 to 6.47 hours (p < 0.0001). Control site daily LOS for admitted patients increased 26 minutes from 4.52 to 4.95 hours (p < 0.0001). Results were similar after adjusting for other covariates (day of week, ED volume, and triage level). LOS of discharged patients at study ED decreased by 14 minutes, from 4.1 hours to 3.8 hours (p < 0.001), while the control ED saw no significant changes in discharged patient LOS (2.6 hours to 2.7 hours, p = 0.06). Left without being seen rates did not decrease at either site. Conclusion: Opening an OT unit was associated with a 30-minute reduction in average daily ED LOS for admitted patients and discharged patients in the study ED. Given the large expense of opening an OT, future studies should compare capacity-dependent (e.g., OT) vs. capacity-independent (e.g, organizational) interventions to reduce ED crowding. Fran Balamuth, Katie Hayes, Cynthia Mollen, Monika Goyal Children's Hospital of Philadelphia, Philadelphia, PA Background: Lower abdominal pain and genitourinary problems are common chief complaints in adolescent females presenting to emergency departments. 
Pelvic inflammatory disease (PID) is a potentially severe complication of lower genital tract infections, involving inflammation of the female upper genital tract secondary to ascending STIs. PID has been associated with severe sequelae including infertility, ectopic pregnancy, and chronic pelvic pain. We describe the prevalence and microbial patterns of PID in a cohort of adolescent females presenting to an urban emergency department with abdominal or genitourinary complaints. Objectives: To describe the prevalence and microbial patterns of PID in a cohort of adolescent patients presenting to an ED with lower abdominal or genitourinary complaints. Methods: This is a secondary analysis of a prospective study of females ages 14-19 years presenting to a pediatric ED with lower abdominal or genitourinary complaints. Diagnosis of PID was per 2006 CDC guidelines. Patients underwent Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (GC) testing via the urine APTIMA Combo 2 Assay and Trichomonas vaginalis (TV) testing using the vaginal OSOM Trichomonas rapid test. Descriptive statistics were performed using STATA 11.0. Results: The prevalence of PID in this cohort of 328 patients was 19.5% (95% CI 15.2%, 23.8%), 37.5% (95% CI 25.3%, 49.7%) of whom had positive sexually transmitted infection (STI) testing: 25% (95% CI 14.1%, 35.9%) with CT, 7.8% (95% CI 1.1%, 14.6%) with GC, and 12.5% (95% CI 4.2%, 20.8%) with TV. 84.4% (95% CI 75.2%, 93.5%) of patients diagnosed with PID received antibiotics consistent with CDC recommendations. Patients with lower abdominal pain as their chief complaint were more likely to have PID than patients with genitourinary complaints (OR 3.3, 95% CI 1.7, 6.4). Conclusion: A substantial number of adolescent females presenting to the emergency department with lower abdominal pain were diagnosed with PID, with microbial patterns similar to those previously reported in largely adult, outpatient samples. Furthermore, appropriate treatment for PID was observed in the majority of patients diagnosed with PID. Impact Background: In resource-poor settings, maternal health care facilities are often underutilized, contributing to high maternal mortality. The effect of ultrasound in these settings on patients, health care providers, and communities is poorly understood. Objectives: The purpose of this study was to assess the effect of the introduction of maternal ultrasound in a population not previously exposed to this intervention. Methods: An NGO-led program trained nurses at four remote clinics outside Koutiala, Mali, who performed 8,339 maternal ultrasound scans over three years. Our researchers conducted an independent assessment of this program, which involved logbook review, sonographer skill assessment, referral follow-up, semi-structured interviews of clinic staff and patients, and focus groups of community members in surrounding villages. Analyses included the effect of ultrasound on clinic function, job satisfaction, community utilization of prenatal care and maternity services, alterations in clinical decision making, sonographer skill, and referral frequency. We used QSR NVivo 9 to organize qualitative findings, code data, and identify emergent themes, and GraphPad software (La Jolla, CA) and Microsoft Excel to tabulate quantitative findings. Results: -Findings that triggered changes in clinical practice were noted in 10.1% of ultrasounds, with a 3.5% referral rate to comprehensive maternity care facilities.
-Skill retention and job satisfaction for ultrasound providers were high. -The number of patients coming for antenatal care increased after the introduction of ultrasound, in an area where the birth rate has been decreasing. -Over time, women traveled from farther distances to access ultrasound and participate in antenatal care. -Very high acceptance among staff, patients, and community members. -Ultrasound was perceived as most useful for finding fetal position, sex, due date, and well-being. -Improved confidence in diagnosis and treatment plan for all cohorts. -Improved compliance with referral recommendations. -No evidence of gender selection motivation for ultrasound use. Conclusion: Use of maternal ultrasound in rural and resource-limited settings draws women to an initial antenatal care visit, increases referral, and improves job satisfaction among health care workers. Methods: A retrospective database analysis was conducted using the electronic medical record from a single, large academic hospital. ED patients who received a billing diagnosis of ''nausea and vomiting of pregnancy'' or ''hyperemesis gravidarum'' between 1/1/10 and 12/31/10 were selected. A manual chart review was conducted with demographic and treatment variables collected. Statistical significance was determined using multiple regression analysis for a primary outcome of return visit to the emergency department for nausea and vomiting of pregnancy. Results: 113 patients were identified. The mean age was 27.1 years (SD±5.25), mean gravidity 2.90 (SD±1.94), and mean gestational age 8.78 weeks (SD±3.21). The average length of ED evaluation was 730 min (SD±513). Of the 113 patients, 38 (33.6%) had a return ED visit for nausea and vomiting of pregnancy, 17 (15%) were admitted to the hospital, and 49 (43%) were admitted to the ED observation protocol. Multiple regression analysis showed that the presence of medical co-morbidity (p = 0.039), patient gravidity (p = 0.016), gestational age (p = 0.038), and admission to the hospital (p = 0.004) had small but significant effects on the primary outcome (return visits to the emergency department). No other variables were found to be predictive of return visits to the ED, including admission to the ED observation unit or factors classically thought to be associated with severe forms of nausea and vomiting in pregnancy, such as ketonuria, electrolyte abnormalities, or vital sign abnormalities. Conclusion: Nausea and vomiting in pregnancy has a high rate of return ED visits that can be predicted by young patient age, low patient gravidity, early gestational age, and the presence of other comorbidities. These patients may benefit from obstetric consultation and/or optimization of symptom management after discharge in order to prevent recurrent utilization of the ED. Prevalence Conclusion: There is a high prevalence of HT in adult SA victims. Although our study design and data do not allow us to make any inferences regarding causation, this first report of HT ED prevalence suggests the opportunity to clarify this relationship and the potential opportunity to intervene. Background: Sexually transmitted infections (STI) are a significant public health problem. Because of the risks associated with STIs, including PID, ectopic pregnancy, and infertility, the CDC recommends aggressive treatment with antibiotics in any patient with a suspected STI.
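The prevalence estimates quoted in the PID abstract above (for example, 19.5% of 328 patients, 95% CI 15.2% to 23.8%) are consistent with a simple normal-approximation (Wald) binomial interval; the sketch below shows that calculation. The numerator of 64 is inferred from the reported prevalence, and the Wald method is our assumption, not one stated by the authors.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# Roughly 64 of 328 adolescents met the CDC case definition for PID (19.5%).
p, lo, hi = wald_ci(64, 328)
print(f"Prevalence {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# Expected output: approximately 19.5% (15.2% to 23.8%).
```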
Objectives: To determine the rates of positive gonorrhea and chlamydia (G/C) screening and rates of empiric antibiotic use among patients of an urban academic ED with >55,000 visits in Boston, MA. Methods: A retrospective study of all patients who had G/C cultures in the ED over 12 months. Chi-square was used in data analysis. Sensitivity and specificity were also calculated. Results: A positive rate of 9/712 (1.2%) was seen for gonorrhea and 26/714 (3.6%) for chlamydia. Females had positive rates of 2/602 (0.3%) and 17/603 (2.8%), respectively. Males had higher rates of 7/110 (6.4%) (p < 0.001) and 9/111 (8.1%) (p = 0.006). 284 patients with G/C cultures sent received an alternative diagnosis, the most common being UTI (63), ovarian pathology (35), vaginal bleeding (34), and vaginal candidiasis (33); 4 were excluded. This left 426 without a definitive diagnosis. Of these, 24.2% (87/360) of females were treated empirically with antibiotics for G/C, and a greater percentage of males (66%, 45/66) were treated empirically (p < 0.001). Of those empirically treated, 109/132 (82.6%) had negative cultures. Meanwhile, 9/32 (28.1%) who ultimately had positive cultures were not treated with antibiotics during their ED stay. Sensitivity of the provider to predict presence of disease based on the decision to give empiric antibiotics was 71.9% (CI 53.0-85.6). Specificity was 72.3% (CI 67.6-76.6). Conclusion: Most patients screened in our ED for G/C did not have positive cultures, and 82.6% of those treated empirically were found not to have G/C. While early treatment is important to prevent complications, there are risks associated with antibiotic use such as allergic reaction, C. difficile infection, and development of antibiotic resistance. Our results suggest that at our institution we may be over-treating for G/C. Furthermore, despite high rates of treatment, 28% of patients who ultimately had positive cultures did not receive antibiotics during their ED stay. Further research into predictive factors or development of a clinical decision rule may be useful to help determine which patients are best treated empirically with antibiotics for presumed G/C. Background: Air travel may be associated with unmeasured neurophysiological changes in an injured brain that may affect post-concussion recovery. No study has compared the effect of commercial air travel on concussion injuries, despite the rather obvious effects of decreased oxygen tension and increased dehydration on acute mTBI. Objectives: To determine if air travel within 4-6 hours of concussion is associated with increased recovery time in professional football and hockey players. Methods: Prospective cohort study of all active-roster National Football League and National Hockey League players during the 2010-2011 seasons. Internet review of league websites was used to identify concussive injuries and when each player returned to play, solely for mTBI. Team schedules and flight times were also confirmed to include only players who flew immediately following the game (within 4-6 hr). Multiple injuries were excluded, as were players who had an injury around the all-star break (NHL) or the scheduled off week (NFL). Results: During the 2010-2011 NFL and NHL seasons, 122 (7.2%) and 101 (13.0%) players experienced a concussion (percentages of total players in the respective leagues). Of these, 68 NFL players (57%) and 39 NHL players (39%) flew within 6 hours of the incident injury.
NHL 1060 miles, SD 579), and all flights were in a pressurized cabin. The mean number of games missed for NFL and NHL players who traveled by air immediately after concussion was increased by 29% and 24%, respectively, compared with those who did not travel by air (NFL: 3.8 games [SD 2.2] vs. 2.6 games [SD 1.8]; NHL: 16.2 games [SD 22.0] vs. 12.4 [SD 18.6]; p < 0.03). Conclusion: This is an initial report of prolonged recovery, measured as more games missed, for professional athletes who fly on commercial airlines post-mTBI compared to those who do not subject their recently injured brains to pressurized air flight. The decreased oxygen tension at a cabin altitude equivalent of 7,500 feet, the decreased humidity with increased dehydration, and the duress of travel accompanying pressurized airline cabins all likely increase the concussion penumbra in acute mTBI. Early air travel after concussion should be further evaluated and likely postponed 48-72 hours, until initial symptoms subside. Background: Previous studies have shown better in-hospital stroke time targets for those who arrive by ambulance compared to other modes of transport. However, regional studies report that less than half of stroke patients arrive by ambulance. Objectives: Our objectives were to describe the proportion of stroke patients who arrive by ambulance nationwide, and to examine regional differences and factors associated with the mode of transport to the emergency department (ED). Methods: This is a cross-sectional study of all patients with a primary discharge diagnosis of stroke based on previously validated ICD-9 codes abstracted from the National Hospital Ambulatory Medical Care Survey for 2007-2009. We excluded subjects <18 years of age and those with missing data. The study-related survey variables included patient demographics, community characteristics, mode of transport to the hospital, and hospital characteristics. Results: 566 patients met inclusion criteria, representing 2,153,234 patient records nationally. Of these, 50.4% arrived by ambulance. After adjustment for potential confounders, patients residing in the West and South had lower odds of arriving by ambulance for stroke when compared to the Northeast (Southern Region, OR 0.45, 95% CI 0.26-0.76; Western Region, OR 0.45, 95% CI 0.25-0.84; Midwest Region, OR 0.56, 95% CI 0.31-1.01). Compared to the Medicare population, privately insured and self-insured patients had lower odds of arriving by ambulance (OR for private insurance 0.48, 95% CI 0.28-0.84; OR for self-payers 0.36, 95% CI 0.14-0.93). Age, sex, race, urban or rural ED location, and safety net status were not independently associated with ambulance use. Conclusion: Patients with stroke arrive by ambulance more frequently in the Northeast than in other regions of the US. Identifying reasons for this regional difference may be useful in improving ambulance utilization and overall stroke care nationwide. Objectives: We sought to determine whether there was a difference in type of stroke presentation based upon race. We further sought to determine whether there is an increase in hemorrhagic strokes among Asian patients with limited English proficiency. Methods: We performed a retrospective chart review over 1 year of all patients age 18 and older who were diagnosed with cerebral vascular accident (CVA) or intracranial hemorrhage (ICH). We collected data on patient demographics and past medical history. We then stratified patients according to race (white, black, Latino, Asian, and other). 
We classified strokes as ischemic, intracranial hemorrhage (ICH), subarachnoid hemorrhage (SAH), subdural hemorrhage (SDH), and other (e.g., bleeding into metastatic lesions). We used only the index visit. We present the data as percentages, medians, and interquartile ranges (IQR). We tested the association of the outcome of intracranial hemorrhage against demographic and clinical variables using chi-square and Kruskal-Wallis tests. We performed a logistic regression model to determine factors related to presentation with an intracranial hemorrhage (ICH Background: The practice of obtaining laboratory studies and routine CT scan of the brain on every child with a seizure has been called into question in the patient who is alert, interactive, and back to functional baseline. There is still no standard practice for the management of non-febrile seizure patients in the pediatric emergency department (PED). Objectives: We sought to determine the proportion of patients in whom clinically significant laboratory studies and CT scans of the brain were obtained in children who presented to the PED with a first or recurrent non-febrile seizure. We hypothesize that the majority of these children do not have clinically significant laboratory or imaging studies. If clinically significant values were found, the history given would warrant further laboratory and imaging assessment despite seizure alone. Methods: We performed a retrospective chart review of 93 patients with first-time or recurrent non-febrile seizures at an urban, academic PED between July 2007 and June 2011. Exclusion criteria included presentation to the PED with fever and age less than 2 months. We reviewed specific values including a complete blood count, basic metabolic panel, and liver function tests; antiepileptic drug levels if the child was on antiepileptics for a known seizure disorder; and CT scan findings. Abnormal laboratory and CT scan findings were classified as clinically significant or not. Results: The median age of our study population was 4 years, with a male-to-female ratio of 1.7. 70% of patients had a generalized tonic-clonic seizure. Laboratory studies and CT scans were obtained in 87% and 35% of patients, respectively. Five patients had clinically significant abnormal labs; however, one had ESRD, one developed urosepsis, one had eclampsia, and two others had hyponatremia, which was secondary to diluted formula and Trileptal toxicity. Three children had an abnormal head CT: two had a VP shunt and one had a chromosomal abnormality with developmental delay. Conclusion: The majority of the children analyzed did not have clinically significant laboratory or imaging studies in the setting of a first or recurrent non-febrile seizure. Of those with clinically significant results, the patient's history suggested a possible etiology for the seizure presentation and further workup was indicated. Background: In patients with a negative CT scan for suspected subarachnoid hemorrhage (SAH), CT angiography (CTA) has emerged as a controversial alternative diagnostic strategy in place of lumbar puncture (LP). Objectives: To determine the diagnostic accuracy for SAH and aneurysm of LP alone, CTA alone, and LP followed by CTA if the LP is positive. Methods: We developed a decision and Bayesian analysis to evaluate 1) LP, 2) CTA, and 3) LP followed by CTA if the LP is positive. Data were obtained from the literature. 
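As an illustration of the Bayesian updating such an analysis rests on, the sketch below computes the post-test probability of SAH after a negative head CT from the prior and CT operating characteristics quoted in the next sentence (15% prior probability of SAH; CT sensitivity 92.9%, specificity 100% overall). This is a back-of-the-envelope check, not the authors' full decision model.

```python
# Minimal sketch (not the authors' model): post-test probability of SAH after a
# negative head CT, using Bayes' rule with the point estimates quoted in the abstract.

def post_test_prob_negative(prior: float, sensitivity: float, specificity: float) -> float:
    """P(disease | negative test) via Bayes' rule."""
    p_neg_given_disease = 1.0 - sensitivity        # false-negative rate
    p_neg_given_no_disease = specificity           # true-negative rate
    numerator = prior * p_neg_given_disease
    denominator = numerator + (1.0 - prior) * p_neg_given_no_disease
    return numerator / denominator

if __name__ == "__main__":
    p = post_test_prob_negative(prior=0.15, sensitivity=0.929, specificity=1.0)
    print(f"P(SAH | negative CT) ~ {p:.1%}")  # ~1.2%, consistent with the reported 0.5-3.7% range
```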
The model considers the probability of SAH (15%), the probability of aneurysm (85% if SAH), the sensitivity and specificity of CT (92.9% and 100% overall), of LP (based on RBC counts and xanthochromia), and of CTA, and the rate of traumatic tap and its influence on SAH detection. Analyses considered all patients and those presenting at less than 6 hours or greater than 6 hours from symptom onset by varying the sensitivity and specificity of CT and CTA. Results: Using the reported ranges of CT scan sensitivity and specificity, the revised likelihood of SAH following a negative CT ranged from 0.5-3.7%, and the likelihood of aneurysm ranged from 2.3-5.4%. Following any of the diagnostic strategies, the likelihood of missing SAH ranged from 0-0.7%. Either LP strategy diagnosed 99.8% of SAHs versus 83-84% with CTA alone, because CTA only detected SAH in the presence of an aneurysm. False positive SAH with LP ranged from 8.5-8.8% due to traumatic taps, and with CTA ranged from 0.2-6.0% due to aneurysms without SAH. The positive predictive value for SAH ranged from 5.7-30% with LP and from 7.9-63% with CTA. For patients presenting within 6 hours of symptom onset, the revised likelihood of SAH following a negative CT became 0.53%, and the likelihood of aneurysm ranged from 2.3-2.7%. Following any of the diagnostic strategies, the likelihood of missing SAH ranged from 0.01-0.095%. Either LP strategy diagnosed 99.8% of SAH versus 83-84% with CTA alone. False positive SAH with LP was 8.8%, and with CTA ranged from 0.2-5.1%. The positive predictive value for SAH was 5.7% with LP and from 7.9-63% with CTA. CTA following a positive LP diagnosed 8.5-24% of aneurysms. Conclusion: LP strategies are more sensitive for detecting SAH but less specific than CTA because of traumatic taps, leading to lower positive predictive values for SAH with LP than with CTA. Either diagnostic strategy results in a low likelihood of missing SAH, particularly within 6 hours of symptom onset. Background: Recent studies support perfusion imaging as a prognostic tool in ischemic stroke, but little data exist regarding its utility in transient ischemic attack (TIA). CT perfusion (CTP), which is more available and less costly to perform than MRI, has not been well studied. Objectives: To characterize CTP findings in TIA patients, and identify imaging predictors of outcome. Methods: This retrospective cohort study evaluated TIA patients at a single ED over 15 months who had CTP at initial evaluation. A neurologist blinded to CTP findings collected demographic and clinical data. CTP images were analyzed by a neuroradiologist blinded to clinical information. CTP maps were described as qualitatively normal, increased, or decreased in mean transit time (MTT), cerebral blood volume (CBV), and cerebral blood flow (CBF). Quantitative analysis involved measurements of average MTT (seconds), CBV (cc/100 g), and CBF (cc/[100 g x min]) in standardized regions of interest within each vascular distribution. These were compared with values in the other hemisphere for relative measures of MTT difference, CBV ratio, and CBF ratio. An MTT difference of ≥2 seconds, rCBV ≤0.60, and rCBF ≤0.48 were defined as abnormal based on prior studies. Clinical outcomes including stroke, TIA, or hospitalization during follow-up were determined up to one year following the index event. Dichotomous variables were compared using Fisher's exact test. Logistic regression was used to evaluate the association of CTP abnormalities with outcome in TIA patients. 
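A minimal sketch (an assumed helper, not the study's image-analysis pipeline) of the quantitative abnormality thresholds just stated: a vascular territory is flagged as abnormal when the MTT difference is at least 2 seconds, or the relative CBV or relative CBF falls below the stated cut-offs.

```python
# Minimal sketch of the CT perfusion abnormality rule described in the Methods above:
# MTT difference >= 2 s, relative CBV <= 0.60, or relative CBF <= 0.48.

from dataclasses import dataclass

@dataclass
class CtpTerritory:
    mtt_sec: float          # mean transit time in the territory of interest (s)
    mtt_contra_sec: float   # mean transit time in the contralateral territory (s)
    rcbv: float             # CBV ratio vs. the other hemisphere
    rcbf: float             # CBF ratio vs. the other hemisphere

def is_abnormal(t: CtpTerritory) -> bool:
    mtt_difference = t.mtt_sec - t.mtt_contra_sec
    return (mtt_difference >= 2.0) or (t.rcbv <= 0.60) or (t.rcbf <= 0.48)

# Example: prolonged MTT without a matched CBV drop (a "mismatch" pattern)
print(is_abnormal(CtpTerritory(mtt_sec=7.5, mtt_contra_sec=4.8, rcbv=0.85, rcbf=0.55)))  # True
```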
Results: Of 99 patients with validated TIA, 53 had CTP done. Mean age was 72 ± 12 years, 55% were women, and 64% were Caucasian. Mean ABCD2 score was 4.7 ± 2.1, and 69% had an ABCD2 score ≥4. Prolonged MTT was the most common abnormality (19, 36%), and 5 (9.4%) had decreased CBV in the same distribution. On quantitative analysis, 23 (43%) had a significant abnormality. Four patients (7.5%) had prolonged MTT and decreased CBV in the same territory, while 17 (32%) had mismatched abnormalities. When tested in a multivariate model, no significant associations between mismatch abnormalities on CTP and new stroke, TIA, or hospitalizations were observed. Conclusion: CTP abnormalities are common in TIA patients. Although no association between these abnormalities and clinical outcomes was observed in this small study, this needs to be studied further. Objectives: We hypothesized that pre-thrombolytic anti-hypertensive treatment (AHT) may prolong door-to-treatment time (DTT). Methods: Secondary data analysis of consecutive tPA-treated patients at 24 randomly selected Michigan community hospitals in the INSTINCT trial. DTT among stroke patients who received pre-thrombolytic AHT was compared to those who did not receive pre-thrombolytic AHT. We then calculated a propensity score for the probability of receiving pre-thrombolytic AHT using a logistic regression model with covariates including demographics, stroke risk factors, antiplatelet or beta blocker as home medication, stroke severity (NIHSS), onset to door time, admission glucose, pretreatment systolic and diastolic blood pressure, EMS usage, and location at time of stroke. A paired t-test was then performed to compare the DTT between the propensity-matched groups. A separate generalized estimating equations (GEE) approach was also used to estimate the differences between patients receiving pre-thrombolytic AHT and those who did not while accounting for within-hospital clustering. Results: A total of 557 patients were included in INSTINCT; however, onset, arrival, or treatment times could not be determined in 23, leaving 534 patients for this analysis. The unmatched cohort consisted of 95 stroke patients who received pre-thrombolytic AHT and 439 stroke patients who did not receive AHT from 2007-2010 (table). In the unmatched cohort, patients who received pre-thrombolytic AHT had a longer DTT (mean increase 9 minutes; 95% confidence interval (CI) 2-16 minutes) than patients who did not receive pre-thrombolytic AHT. After propensity matching (table), patients who received pre-thrombolytic AHT had a longer DTT (mean increase 10.4 minutes, 95% CI 1.9-18.8) than patients who did not receive pre-thrombolytic AHT. This effect persisted and its magnitude was not altered by accounting for clustering within hospitals. Conclusion: Pre-thrombolytic AHT is associated with modest delays in DTT. This represents a feasible target for physician educational interventions and quality improvement initiatives. Further research evaluating optimum hypertension management before thrombolytic treatment is warranted. Post-PDs, 7% had only Pre-PDs, and 9% had both. The most common PDs included failure to treat post-treatment hypertension (131, 24%), antiplatelet agent within 24 hours of treatment (61, 11%), pre-treatment blood pressure over 185/110 (39, 7%), anticoagulant agent within 24 hours of treatment (31, 6%), and treatment outside the time window (29, 5%). 
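The propensity-matched comparison in the pre-thrombolytic AHT abstract above lends itself to a short illustration. The sketch below is not the INSTINCT analysis code: the column names (pre_tx_aht, dtt_min) and covariate list are hypothetical, and it shows only the core steps of fitting a propensity model, matching treated to untreated patients 1:1 on the score, and comparing door-to-treatment time with a paired t-test.

```python
# Minimal sketch (hypothetical data and column names) of a propensity-matched
# comparison of door-to-treatment time between treated and untreated patients.

import numpy as np
import pandas as pd
from scipy import stats
from sklearn.linear_model import LogisticRegression

def propensity_matched_dtt(df: pd.DataFrame, covariates: list[str]) -> tuple[float, float]:
    """Return (mean DTT difference, treated minus matched controls; paired t-test p-value)."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["pre_tx_aht"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["pre_tx_aht"] == 1]
    available = df[df["pre_tx_aht"] == 0].copy()

    # Greedy 1:1 nearest-neighbor matching on the propensity score, without replacement.
    matched_pairs = []
    for _, row in treated.iterrows():
        j = (available["pscore"] - row["pscore"]).abs().idxmin()
        matched_pairs.append((row["dtt_min"], available.loc[j, "dtt_min"]))
        available = available.drop(index=j)

    treated_dtt, control_dtt = map(np.array, zip(*matched_pairs))
    _, p_value = stats.ttest_rel(treated_dtt, control_dtt)
    return float(np.mean(treated_dtt - control_dtt)), float(p_value)
```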
Symptomatic intracranial hemorrhage (SICH) was observed in 7.3% of patients with PDs and 6.5% of patients without any PD. In-hospital case fatality was 12% with and 10% without a PD. In the fully adjusted model, older age was significantly associated with Pre-PDs (Table). When Post-PDs were evaluated with adjustment for Pre-PDs, age was not associated with PDs; however, Pre-PDs were associated with Post-PDs. Conclusion: Older age was associated with increased odds of Pre-PDs in Michigan community hospitals. Pre-PDs were associated with Post-PDs. SICH and in-hospital case fatality were not associated with PDs; however, the low number of such events limited our ability to detect a difference. CT Background: MRI has become the gold standard for the detection of cerebral ischemia and is a component of multiple imaging-enhanced clinical risk prediction rules for the short-term risk of stroke in patients with transient ischemic attack (TIA). However, it is not always available in the emergency department (ED) and is often contraindicated. Leukoaraiosis (LA) is a radiographic term for white matter ischemic changes, and has recently been shown to be independently predictive of disabling stroke. Although it is easily detected by both CT and MRI, their comparative ability is unknown. Objectives: We sought to determine whether leukoaraiosis, when combined with evidence of acute or old infarction as detected by CT, achieved similar sensitivity to MRI in patients presenting to the ED with TIA. Methods: We conducted a retrospective review of consecutive patients diagnosed with TIA between June 2009 and July 2011 who underwent both CT and MRI as part of routine care within 1 calendar day of presentation to a single, academic ED. CT and MR images were reviewed by a single emergency physician who was blinded to the MR images at the time of CT interpretation. LA was graded using the van Swieten scale (VSS), a validated grading scale applicable to both CT and MRI. Anterior and posterior regions were graded independently from 0 to 2. Results: 361 patients were diagnosed with TIA during the study period. Of these, 194 had both CT and MRI Background: Helping others is often a rewarding experience but can also come with a ''cost of caring,'' also known as compassion fatigue (CF). CF can be defined as the emotional and physical toll suffered by those helping others in distress. It is affected by three major components: compassion satisfaction (CS), burnout (BO), and traumatic experiences (TE). Previous literature has recognized an increase in BO related to work hours and stress among resident physicians. Objectives: To assess the state of CF among residents with regard to differences in specialty training, hours worked, number of overnights, and demands of child care. We aim to measure associations with the three components of CF (CS, BO, and TE). Methods: We used the previously validated survey, ProQOL 5. The survey was sent to the residents after approval from the IRB and the program directors. Results: A total of 193 responses were received (40% of the 478 surveyed). Five were excluded due to incomplete questionnaires. We found that residents who worked more hours per week had significantly higher BO levels (median 25 vs 21, p = 0.038) and higher TE (22 vs 19, p = 0.048) than those working fewer hours. There was no difference in CS (42 vs 40, p = 0.73). Eighteen percent of the residents worked a majority of the night shifts. 
These residents had higher levels of BO Background: Emergency department (ED) billing includes both facility and professional fees. An algorithm derived from the medical provider's chart generates the latter fee. Many private hospitals encourage appropriate documentation by financially incentivizing providers. Academic hospitals sometimes lag in this initiative, possibly resulting in less than optimal charting. Past attempts to teach proper documentation using our electronic medical record (EMR) were difficult in our urban, academic ED of 80 providers (approximately 25 attending physicians, 36 residents, and 20 physician assistants). Objectives: We created a tutorial to teach documentation of ED charts, modified the EMR to encourage appropriate documentation, and provided feedback from the coding department. This was combined with an incentive structure shared equally amongst all attendings based on increased collections. We hypothesized this instructional intervention would lead to more appropriate billing, improve chart content, decrease medical liability, and increase educational value of charting process. Methods: Documentation recommendations, divided into two-month phases of 2-3 proposals, were administered to all ED providers by e-mails, lectures, and reminders during sign-out rounds. Charts were reviewed by coders who provided individual feedback if specific phase recommendations were not followed. Our endpoints included change in total RVU, RVUs/ patient, E/M level distribution, and subjective quality of chart improvement. We did not examine effects on procedure codes or facility fees. Results: Our base average RVU/patient in our ED from 1/1/11-6/30/11 was 2.615 with monthly variability of approximately 2%. Implementation of phase one increased average RVU/patient within two weeks to 2.73 (4.4% increase from baseline, p < 0.05). The second aggregate phase implemented 8 weeks later increased average RVU/patient to 3.04 (16.4% increase from baseline, p < 0.05). Conclusion: Using our teaching methods, chart reviews focused on 2-3 recommendations at a time, and EMR adjustments, we were able to better reflect the complexity of care that we deliver every day in our medical charts. Future phases will focus on appropriate documentation for procedures, critical care, fast track, and pediatric patients, as well as examining correlations between increase in RVUs with charge capture. Identifying Mentoring ''Best Practices'' for Medical School Faculty Julie L. Welch, Teresita Bellido, Cherri D. Hobgood Background: Mentoring has been identified as an essential component for career success and satisfaction in academic medicine. Many institutions and departments struggle with providing both basic and transformative mentoring for their faculty. Objectives: We sought to identify and understand the essential practices of successful mentoring programs. Methods: Multidisciplinary institutional stakeholders in the school of medicine including tenured professors, deans, and faculty acknowledged as successful mentors were identified and participated in focused interviews between Mar-Nov 2011. The major area of inquiry involved their experiences with mentoring relationships, practices, and structure within the school, department, or division. Focused interview data were transcribed and grounded theory analysis was performed. Additional data collected by a 2009 institutional mentoring taskforce were examined. Key elements and themes were identified and organized for final review. 
Results: Mentoring practices were identified in three categories: 1) general themes for all faculty, 2) specific practices for faculty groups (Basic Science Researchers, Clinician Researchers, Clinician Educators), and 3) national examples. Additional mentoring strategies that failed were identified. The general themes were quite universal among faculty groups. These included: clarifying the best type of mentoring for the mentee, allowing the mentee to choose the mentor, establishing a panel of mentors with complementary skills, scheduling regular meetings, establishing a clear mentoring plan with expectations and goals, offering training and resources for both the mentor and mentee at institutional and departmental levels, ensuring ongoing mentoring evaluation, and creating a mechanism to identify and reward mentoring. National practice examples offered critical recommendations to address multi-generational attitudes and faculty diversity in terms of gender, race, and culture. Conclusion: Mentoring strategies can be identified to serve a diverse faculty in academic medicine. Interventions to improve mentoring practices should be targeted at the level of the institution, department, and individual faculty members. It is imperative to adopt results such as these to design effective mentoring programs to enhance the success of emergency medicine faculty seeking robust academic careers. Background: Women comprise half of the talent pool from which the specialty of emergency medicine draws future leaders, researchers, and educators, and yet only 5% of full professors in US emergency medicine are female. Both research and interventions are aimed at reducing the gender gap; however, it will take decades for the benefits to be realized, which creates a methodological challenge in assessing systems change. Current techniques to measure disparities are insensitive to systems change as they are limited to percentages and trends over time. Objectives: To determine whether the Relative Rate Index (RRI) identifies the stage of the academic pipeline at which women are not advancing better than traditional metrics do. Methods: RRI is a method of analysis that assesses the percent of sub-populations in each stage relative to their representation in the stage directly prior. This gives a better sense of advancement relative to the pool available to advance. RRI also standardizes data for ease of interpretation. This study was conducted on the total population of academic professors in all departments at Yale School of Medicine during the academic year of 2010-2011. Data were obtained from the Yale University Provost's office. Results: N = 1305. There were a total of 402 full, 429 associate, and 484 assistant professors. Males comprised 78%, 59%, and 54%, respectively. The RRI for the Department of Emergency Medicine (DEM) was 0.67, 1.93, and 0.78 for full, associate, and assistant professors, respectively, while the corresponding percentages were 44%, 60%, and 33%. Conclusion: Relying solely on percentages masks improvements to the system. Women are most represented at the associate professor level in DEM, highlighting the importance of systems change evidence. Specifically, twice as many women are promoted to associate professor rank as would be expected given the number who exist as assistant professors. Within 5 years, the DEM should have an equal system, as the numbers of associate professors have dramatically increased and these faculty will be eligible for promotion to full professor. 
Additionally, DEM has a better record of retaining and promoting women than other Yale Departments of Medicine at both associate and full professor ranks. Objectives: We examine the payer mixes of community non-rehabilitation EDs in metropolitan areas by region to identify the proportion of academic and nonacademic EDs that could be considered safety net EDs. We hypothesize that the proportion of safety net academic EDs is greater than that for non-academic EDs and is increasing over time. Methods: This is an ecological study examining US ED visits from 2006 through 2008. Data were obtained from the Nationwide Emergency Department Sample (NEDS). We grouped each ED visit according to the unique hospital-based ED identifier, thus creating a payer mix for each ED. We define a ''Safety Net ED'' as any ED where the payer mix satisfied any one of the following three conditions: 1) >30% of all ED visits are Medicaid patients; 2) >30% of all ED visits are self-pay patients; or 3) >40% of all ED visits are either Medicaid or self-pay patients. NEDS tags each ED with a hospital-based variable to delineate metropolitan/non-metropolitan locations and academic affiliation. We chose to examine a subpopulation of EDs tagged as either academic metropolitan or non-academic metropolitan, because the teaching status of non-metropolitan hospitals was not provided. We then measured the proportion of EDs that met safety net criteria by academic status and region. Results: We examined 2,821, 2,793, and 2,844 weighted metro EDs in years 2006-2008, respectively. Table 1 presents safety net proportions. The proportions of academic safety net EDs increased across the study period. Widespread regional variability in safety net proportions existed across all years. The proportions of safety net EDs were highest in the South and lowest in the Northeast and Midwest. Table 2 describes these findings for 2008. Conclusion: These data suggest that the proportion of safety-net academic EDs may be greater than that of non-academic EDs, is increasing over time, and is Objectives: To examine the effect of MA health reform implementation on ED and hospital utilization before and after health reform, using an approach that relies on differential changes in insurance rates across different areas of the state in order to make causal inferences as to the effect of health reform on ED visits and hospitalizations. Our hypothesis was that health care reform (i.e. reducing rates of uninsurance) would result in increased rates of ED use and hospitalizations. Methods: We used a novel difference-in-differences approach, with geographic variation (at the zip code level) in the percentage uninsured as our method of identifying changes resulting from health reform, to determine the specific effect of Massachusetts' health care reform on ED utilization and hospitalizations. Using administrative data available from the Massachusetts Division of Health Care Finance and Policy Acute Hospital Case Mix Databases, we compared a one-year period before health reform with an identical period after reform. We fit linear regression models at the area-quarter level to estimate the effect of health reform and the changing uninsurance rate (defined as self-pay only) on ED visits and hospitalizations. Results: There were 2,562,330 ED visits and 777,357 hospitalizations pre-reform and 2,713,726 ED visits and 787,700 hospitalizations post-reform. 
The rate of uninsurance decreased from 6.2% to 3.7% in the ED group and from 1.3% to 0.6% in the hospitalization group. A reduction in the rate of the uninsured was associated with a small but statistically significant increase in ED utilization (p = 0.03) and no change in hospitalizations (p = 0.13). Conclusion: We find that increasing levels of insurance coverage in Massachusetts were associated with small but statistically significant increases in ED visits, but no differences in rates of hospitalizations. These results should aid in planning for anticipated changes that might result from the implementation of health reform nationally. with high levels of co-morbidity when untreated in adolescents. Despite broad CDC screening recommendations, many youth do not receive testing when indicated. The pediatric emergency department (PED) is a venue with a high volume of patients potentially in need of STI testing, but assessing risk in the PED is difficult given constraints on time and privacy. We hypothesized that patients visiting a PED would find an Audio-enhanced Computer-Assisted Self-Interview (ACASI) program to establish STI risk easy to use, and would report a preference for the ACASI over other methods of disclosing this information. Objectives: To assess acceptability, ease of use, and comfort level of an ACASI designed to assess adolescents' risk for STIs in the PED. Methods: We developed a branch-logic questionnaire and ACASI system to determine whether patients aged 15-21 visiting the PED need STI testing, regardless of chief complaint. We obtained consent from participants and guardians. Patients completed the ACASI in private on a laptop. They read a one-page computer introduction describing study details and completed the ACASI. Patients rated use of the ACASI upon completion using five-point Likert scales. Results: 2030 eligible patients visited the PED during the study period. We approached 873 (43%) and enrolled and analyzed data for 460/873 (53%). The median time to read the introduction and complete the ACASI was 8.2 minutes (interquartile range 6.4-11.5 minutes). 90.7% of patients rated the ACASI ''very easy'' or ''easy'' to use, 90.6% rated the wording as ''very easy'' or ''easy'' to understand, 60% rated the ACASI ''very short'' or ''short'', 60.3% rated the audio as ''very helpful'' or ''helpful,'' 82.9% were ''very comfortable'' or ''comfortable'' with the system confidentiality, and 71.2% said they would prefer a computer interface over in-person interviews or written surveys for collection of this type of information. Conclusion: Patients rated the computer interface of the ACASI as easy and comfortable to use. A median of 8.2 minutes was needed to obtain meaningful clinical information. The ACASI is a promising approach to enhance the collection of sensitive information in the PED. The Participants were randomized to one of three conditions, BI delivered by a computer (CBI), BI delivered by a therapist assisted by a computer (TBI), or control, and completed 3, 6, and 12 month follow-up. In addition to content on alcohol misuse and peer violence, adolescents reporting dating violence received a tailored module on dating violence. The main outcome for this analysis was frequency of moderate and severe dating victimization and aggression at the baseline assessment and 3, 6, and 12 months post ED visit. Results: Among eligible adolescents, 55% (n = 397) reported dating violence and were included in these analyses. 
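The area-quarter regression in the Massachusetts health reform abstract above can be sketched as follows. This is a hedged illustration only: the column names (ed_visit_rate, pct_uninsured, post_reform, area_id, quarter) are hypothetical, and the formula is a generic difference-in-differences shape rather than the authors' exact specification.

```python
# Minimal sketch (hypothetical column names, not the Massachusetts DHCFP dataset) of a
# difference-in-differences style linear regression at the area-quarter level: the
# interaction of local uninsurance with the post-reform indicator carries the estimate,
# while area and quarter fixed effects absorb level differences and common time trends.

import pandas as pd
import statsmodels.formula.api as smf

def fit_did_model(panel: pd.DataFrame):
    """panel: one row per zip-code area per quarter, with columns
    ed_visit_rate, pct_uninsured, post_reform (0/1), area_id, quarter."""
    model = smf.ols(
        "ed_visit_rate ~ pct_uninsured + pct_uninsured:post_reform"
        " + C(area_id) + C(quarter)",
        data=panel,
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["area_id"]})
    # model.params["pct_uninsured:post_reform"] is the difference-in-differences term.
    return model
```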
Compared to controls, after controlling for baseline dating victimization, participants in the CBI showed reductions in moderate dating victimization at 3 months (OR 0.7; CI 0.51-0.99; p < 0.05, effect size 0.12) and 6 months (OR 0.56; CI 0.38-0.83; p < 0.01, effect size 0.18); models examining interaction effects were significant for the CBI on moderate dating victimization at 3 and 6 months. Significant interaction effects were found for the TBI on moderate dating victimization at 6 and 12 months and severe dating victimization at 3 months. Conclusion: The computer-based intervention shows promise for delivering content that decreases moderate dating victimization over 6 months. The therapist BI is promising for decreasing moderate dating victimization over 12 months and severe dating victimization over 3 months. ED-based BIs delivered on a computer addressing multiple risk behaviors could have important public health effects. The 21-only ordinance was associated with a significant reduction in AR visits. This ordinance was also associated with reduction in underage AR visits, UI student visits, and public intoxication bookings. These data suggest that other cities should consider similar ordinances to prevent unwanted consequences of alcohol. Background: Prehospital providers perform tracheal intubation in the prehospital environment, and failed attempts are of concern due to the danger of hypoxia and hypotension. Some question the appropriateness of intubation in this setting due to the morbidity risk associated with intubation in the field. Thus, it is important to gain an understanding of the factors that predict the success of prehospital intubation attempts to inform this discussion. Objectives: To determine the factors that affect first-attempt success rates of paramedic intubations in a rapid sequence intubation (RSI)-capable critical care transport service. Methods: We conducted a multivariate logistic analysis on a prospectively collected database of airway management from an air and land critical care transport service that provides scene responses and interfacility transport in the Province of Ontario. Background: Motor vehicle collisions (MVCs) are one of the most common types of trauma for which people seek ED care. The vast majority of these patients are discharged home after evaluation. Acute psychological distress after trauma causes great suffering and is a known predictor of posttraumatic stress disorder (PTSD) development. However, the incidence and predictors of psychological distress among patients discharged to home from the ED after MVCs have not been reported. Objectives: To examine the incidence and predictors of acute psychological distress among individuals seen in the ED after MVCs and discharged to home. Methods: We analyzed data from a prospective observational study of adults 18-64 years of age presenting to one of eight ED study sites after MVC between 02/2009 and 10/2011. English-speaking patients who were alert and oriented, stable, and without injuries requiring hospital admission were enrolled. Patient interview included assessment of patient sociodemographic and psychological characteristics and MVC characteristics. Level of psychological distress in the ED was assessed using the 13-item Peritraumatic Distress Inventory (PDI). PDI scores >23 are associated with increased risk of PTSD and were used to define substantial psychological distress. 
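A minimal sketch of the PDI dichotomization just described and of the logistic model mentioned in the next sentence. The column names are hypothetical, and the study itself used Stata IC 11.0; the Python version below is only an equivalent-in-spirit illustration.

```python
# Minimal sketch (hypothetical column names; the study used Stata IC 11.0):
# dichotomize the Peritraumatic Distress Inventory at >23 and regress the
# resulting indicator on patient and crash characteristics.

import pandas as pd
import statsmodels.formula.api as smf

def fit_distress_model(df: pd.DataFrame):
    df = df.assign(substantial_distress=(df["pdi_score"] > 23).astype(int))
    model = smf.logit(
        "substantial_distress ~ age + female + pre_mvc_depression"
        " + arrived_on_backboard + vehicle_damage_severity + vehicle_speed",
        data=df,
    ).fit()
    return model  # exponentiated coefficients give adjusted odds ratios
```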
Descriptive statistics and logistic regression were performed using Stata IC 11.0 (StataCorp LP, College Station, Texas). Results: 9339 MVC patients were screened, 1584 were eligible, and 949 were enrolled. 361/949 (38%) participants had substantial psychological distress. After adjusting for crash severity (severity of vehicle damage, vehicle speed), substantial patient distress was predicted by sociodemographic factors, pre-MVC depressive symptoms, and arriving at the ED on a backboard (table). Conclusion: Substantial psychological distress is common among individuals discharged from the ED after MVCs and is predicted by patient characteristics separate from MVC severity. A better understanding of the frequency and predictors of substantial psychological distress is an important first step in identifying these patients and developing effective interventions to reduce severe distress in the aftermath of trauma. Such interventions have the potential to reduce both immediate patient suffering and the development of persistent psychological sequelae. The predictive characteristics of PETS, PESI, and sPESI for 30-day mortality in EMPEROR, including AUC, negative predictive value, sensitivity, and specificity, were calculated. Results: Of 1438 patients, the 646 (44.9%; 95% CI 42.3%-47.5%) classified as PETS LOW had 30-day mortality of 0.5% (95% CI 0.1-1.5%), versus 10.2% (95% CI 8.0%-12.4%) in the PETS HIGH group, statistically similar to PESI and sPESI. PETS is significantly more specific for mortality than the sPESI (47.0% vs 37.6%; p < 0.0001), classifying far more patients as low-risk while maintaining a sensitivity of 96% (95% CI 88.3%-99.0%), not significantly different from sPESI or PESI (p > 0.05). Conclusion: With four variables, PETS in this derivation cohort is as sensitive for 30-day mortality as the more complicated PESI and sPESI, with significantly greater specificity than the sPESI for mortality, placing 25% more patients in the low-risk group. External validation is necessary. Nicole Seleno, Jody Vogel, Michael Liao, Emily Hopkins, Richard Byyny, Ernest Moore, Craig Gravitz, Jason Haukoos Denver Health Medical Center, Denver, CO Background: The Sequential Organ Failure Assessment (SOFA) Score, base excess, and lactate have been shown to be associated with mortality in critically ill trauma patients. The Denver Emergency Department (ED) Trauma Organ Failure (TOF) Score was recently derived and internally validated to predict multiple organ failure in trauma patients. The relationship between the Denver TOF Score and mortality has not been assessed or compared to other conventional measures of mortality in trauma. Objectives: To compare the prognostic accuracies of the Denver ED TOF Score, ED SOFA Score, and ED base excess and lactate for mortality in a large heterogeneous trauma population. Methods: A secondary analysis of data from the Denver Health Trauma Registry, a prospectively collected database. Consecutive adult trauma patients from 2005 through 2008 were included in the study. Data collected included demographics, injury characteristics, prehospital care characteristics, response to injury characteristics, ED diagnostic evaluation and interventions, and in-hospital mortality. 
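A minimal sketch (hypothetical column names) of the ROC comparison described in the next sentence: compute the area under the curve for each candidate measure against in-hospital mortality and compare the results side by side. Measures scored so that higher values indicate greater severity can be passed directly; a measure scored in the opposite direction would need its sign flipped first.

```python
# Minimal sketch (hypothetical column names) of comparing prognostic accuracy by
# area under the ROC curve for several severity measures against a binary outcome.

import pandas as pd
from sklearn.metrics import roc_auc_score

def compare_aucs(df: pd.DataFrame, measures: list[str], outcome: str = "died") -> dict[str, float]:
    return {m: roc_auc_score(df[outcome], df[m]) for m in measures}

# e.g. compare_aucs(trauma_df, ["denver_tof", "sofa", "base_deficit", "lactate"])
```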
The values of the four clinically relevant measures (Denver ED TOF Score, ED SOFA score, ED base excess, and ED lactate) were determined within four hours of patient arrival, and prognostic accuracies for in-hospital mortality for the four measures were evaluated with receiver operating characteristic (ROC) curves. Multiple imputation was used for missing values. Results: Of the 4,355 patients, the median age was 37 (IQR 26-51) years, median injury severity score was 9 (IQR 4-16), and 81% had blunt mechanisms. Thirty-eight percent (1,670 patients) were admitted to the ICU with a median ICU length of stay of 2.5 (IQR 1-8) days, and 3% (138 patients) died. In the non-survivors, the median values for the four measures were ED SOFA 5.0 (IQR 0.0-8.0); Denver ED TOF 4.0 (IQR 4.0-5.0); ED base excess 7.0 (IQR 8.0-19.0) mEq/L; and ED lactate 6.5 (IQR 4.5-11.8) mmol/L. The areas under the ROC curves for these measures are demonstrated in the figure. Conclusion: The Denver ED TOF Score more accurately predicts in-hospital mortality in trauma patients as compared to the ED SOFA Score, ED base excess, or ED lactate. The Denver ED TOF Score may help identify patients at risk for mortality early, allowing for targeted resuscitation and secondary triage to improve outcomes in these critically ill patients. The Background: Both animal and human studies suggest that early initiation of therapeutic hypothermia (TH) and rapid cooling improve outcomes after cardiac arrest. Objectives: The objective was to determine if administration of cold IV fluids in a prehospital setting decreased time-to-target-temperature (TT), with secondary analysis of effects on mortality and neurological outcome. Methods: Patients resuscitated after out-of-hospital cardiac arrest (OOHCA) who received an in-hospital post-cardiac arrest bundle including TH were prospectively enrolled into a quality assurance database from November 2007 to November 2011. On April 1, 2009, a protocol for intra-arrest prehospital cooling with 4°C normal saline for patients experiencing OOHCA was initiated. We retrospectively compared TT for those receiving prehospital cold fluids and those not receiving cold fluids. TT was defined as 34°C measured via Foley thermistor. Secondary outcomes included mortality, good neurological outcome defined as Cerebral Performance Category (CPC) score of 1 or 2 at discharge, and effects of pre-ROSC cooling. Results: There were 132 patients included in this analysis, with 80 patients receiving prehospital cold IV fluids and 52 who did not. Initially, 63% of patients were in VF/VT and 36% asystole/PEA. Patients receiving prehospital cooling did not have a significant improvement in TT (256 minutes vs 271 minutes, p = 0.64). Survival to discharge was not associated with prehospital cooling (54% vs 50%, p = 0.67), nor was a good neurologic outcome (CPC of 1 or 2 in 49% vs 44%, p = 0.61). Initiating cold fluids prior to ROSC showed both a nonsignificant decrease in survival (48% vs 56%, p = 0.35) and an increase in poor neurologic outcomes (42% vs 50%, p = 0.39). 77% of patients received ≤1 L of cooled IVF prior to hospital arrival. Patients receiving prehospital cold IVF had a longer time from arrest to hospital arrival (44 vs 34 min, p < 0.001) in addition to a prolonged ROSC to hospital time (20 vs 12 min, p = 0.005). 
Conclusion: At our urban hospital, patients achieving ROSC following OOHCA did not demonstrate faster TT or outcome improvement with prehospital cooling compared to cooling initiated immediately upon ED arrival. Further research is needed to assess the utility of prehospital cooling. Assessment Background: An estimated 10% of emergency department (ED) patients 65 years of age and older have delirium, which is associated with short- and long-term risk of morbidity and mortality. Early recognition could result in improved outcomes, but the reliability of delirium recognition in the continuum of emergency care is unknown. Objectives: We tested whether delirium can be reliably detected during emergency care of elderly patients by measuring the agreement between prehospital providers, ED physicians, and trained research assistants using the Confusion Assessment Method for the ICU (CAM-ICU) to identify the presence of delirium. Our hypothesis was that both ED physicians and prehospital providers would have poor ability to detect elements of delirium in an unstructured setting. Methods: Prehospital providers and ED physicians completed identical questionnaires regarding their clinical encounter with a convenience sample of elderly (age >65 years) patients who presented via ambulance to two urban, teaching EDs over a three-month period. Respondents noted the presence or absence of (1) an acute change in mental status, (2) inattention, (3) disorganized thinking, and (4) altered level of consciousness (using the Richmond Agitation Sedation Scale). These four components comprise the operational definition of delirium. A research assistant trained in the CAM-ICU rated each component for the same patients using a standard procedure. We calculated inter-rater reliability (kappa) between prehospital providers, ED physicians, and research assistants for each component. Objectives: This study aimed to assess the association between age and EMS use while controlling for potential confounders. We hypothesized that this association would persist after controlling for confounders. Methods: A cross-sectional survey study was conducted at an academic medical center's ED. An interview-based survey was administered and included questions regarding demographic and clinical characteristics, mode of ED arrival, health care use, and the perceived illness severity. Age was modeled as an ordinal variable (<60, 60-79, and ≥80 years). Bivariate analyses were used to identify potential confounders and effect measure modifiers, and a multivariable logistic regression model was constructed. Odds ratios were calculated as measures of effect. Results: A total of 1092 subjects were enrolled and had usable data for all covariates, 465 (43%) of whom arrived via EMS. The median age of the sample was 60 years, and 52% were female. There was a statistically significant linear trend in the proportion of subjects who arrived via EMS by age (p < 0.0001). Compared to adults aged less than 60 years, the unadjusted odds ratio associating age and EMS use was 1.41 (95% CI: Background: We previously derived a clinical decision rule (CDR) for chest radiography (CXR) in patients with chest pain and possible acute coronary syndrome (ACS) consisting of the absence of three predictors: history of congestive heart failure, history of smoking, and abnormalities on lung auscultation. Objectives: To prospectively validate and refine a CDR for CXR in an independent patient population. 
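As a simple illustration of the previously derived rule described in the Background above, the sketch below encodes the decision logic (defer the chest radiograph only when all three predictors are absent). It is an interpretation for illustration, not a validated implementation of the rule.

```python
# Minimal sketch of the chest radiography decision rule described above:
# CXR can be deferred for possible ACS only when all three predictors are absent
# (no history of congestive heart failure, no smoking history, and no abnormality
# on lung auscultation).

def cxr_indicated(history_chf: bool, history_smoking: bool, abnormal_auscultation: bool) -> bool:
    """True if the rule does NOT allow omitting the chest radiograph."""
    return history_chf or history_smoking or abnormal_auscultation

print(cxr_indicated(False, False, False))  # False: rule suggests CXR can be omitted
print(cxr_indicated(False, True, False))   # True: smoking history present
```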
Methods: We prospectively enrolled patients over 24 years of age with a primary complaint of chest pain and possible ACS from September 2009 to January 2010 at a tertiary care ED with 73,000 annual patient visits. Physicians completed standardized data collection forms before ordering chest radiographs and were thus blinded to CXR findings at the time of data collection. Two investigators, blinded to the predictor variables, independently classified CXRs as ''normal,'' ''abnormal not requiring intervention,'' and ''abnormal requiring intervention'' (e.g, heart failure, infiltrates) based on review of the radiology report and the medical record. Analyses included descriptive statistics, inter-rater reliability assessment (kappa), and recursive partitioning. Results: Of 1159 visits for possible ACS, mean age (SD) was 60.3 (15.6) and 51% were female. Twenty-four percent had a history of acute myocardial infarction, 10% congestive heart failure, and 11% atrial fibrillation. Seventy-one (6.1%, 95% CI 4.9-7.7) patients had a radiographic abnormality requiring intervention. ing the likelihood of coronary artery disease (CAD) could reduce the need for stress testing or coronary imaging. Acyl-CoA:cholesterol acyltransferase-2 (ACAT2) activity has been shown in monkey and murine models to correlate with atherosclerosis. Objectives: To determine if a novel cardiac biomarker consisting of plasma cholesteryl ester levels (CE) typically derived from the activity of ACAT2 is predictive of CAD in a clinical model. Methods: A single center prospective observational cohort design enrolled a convenience sample of subjects from a tertiary care center with symptoms of acute coronary syndrome undergoing coronary CT angiography or invasive angiography. Plasma samples were analyzed for CE composition with mass spectrometry. The primary endpoint was any CAD determined at angiography. Multivariable logistic regression analyses were used to estimate the relationship between the sum of the plasma concentrations from cholesteryl palmitoleate (16:1) and cholesteryl oleate (18:1) (defined as ACAT2-CE) and the presence of CAD. The added value of ACAT2-CE to the model was analyzed comparing the C-statistics and integrated discrimination improvement (IDI). Results: The study cohort was comprised of 113 participants enrolled over 24 months with a mean age 49 (±11.7) years, 59% with CAD at angiography. The median plasma concentration of ACAT2-CE was 938 lM (758, 1099) in patients with CAD and 824 lM (683, 998) in patients without CAD (p = 0.03) (Figure) . When considered with age, sex, and the number of conventional CAD risk factors, ACAT2-CE were associated with a 6.5% increased odds of having CAD per 10 lM increase in concentration. The addition of ACAT2-CE significantly improved the C-statistic (0.89 vs 0.95, p = 0.0035) and IDI (0.15, p < 0.001) compared to the reduced model. In the subgroup of low-risk observation unit patients, the CE model had superior discrimination compared to the Diamond Forrester classification (IDI 0.403, p < 0.001). Conclusion: Plasma levels of ACAT2-CE, considered in a clinical model, have strong potential to predict a patient's likelihood of having CAD. In turn, this could reduce the need for cardiac imaging after the exclusion of MI. Further study of ACAT2-CE as biomarkers in patients with suspected ACS is needed. Background: Outpatient studies have demonstrated a correlation between carotid intima-media thickness (CIMT) on ultrasound and coronary artery disease (CAD). 
There are no known published studies that investigate the role of CIMT in the ED using cardiac CT or percutaneous cardiac intervention (PCI) as a gold standard. Objectives: We hypothesized that CIMT can predict cardiovascular events and serve as a noninvasive tool in the ED. Methods: This was a prospective study of adult patients who presented to the ED and required evaluation for chest pain. The study location was an urban ED with a census of 120,000 annual visits and 24-hour cardiac catheterization. Patients who did not have CT or PCI or had carotid surgery were excluded from the study. Ultrasound CIMT measurements of right and left common carotid arteries were taken with a 10MHz linear transducer (Zonare, Mountain View, CA). Anterior, medial, and posterior views of the near and far wall were obtained (12 CIMT scores total). Images were analyzed by Carotid Analyzer 5 (Mailing Imaging Application LLC, Coralville, Iowa). Patients were classified into two groups based on the results from CT or PCI. A subject was classified as having significant CAD if there was over 70% occlusion or multi-vessel disease. Results: Ninety of 102 patients were included in the study; 55.7% were males. Mean age was 56.6 ± 13 years. There were 34 (37.8%) subjects with significant CAD and 56 (62.2%) with non-significant CAD. The mean of all 12 CIMT measurements was significantly higher in the CAD group than in the non-CAD group (0.60 ± 0.20 vs. 0.35 ± 0.23; p < 0.00001). A logistic regression analysis was carried out with significant CAD as the event of interest and the following explanatory variables in the model: Objectives: To determine the diagnostic yield of routine testing in-hospital or following ED discharge among patients presenting to an ED following syncope. Methods: A prospective, observational, cohort study of consecutive ED patients ‡18 years old presenting with syncope was conducted. The four most commonly utilized tests (echocardiography, telemetry, ambulatory electrocardiography monitoring, and cardiac markers) were studied. Interobserver agreement as to whether tests results determined the etiology of the syncope was measured using kappa (k) values. Results: Of 570 patients with syncope, 150 (26%) had echocardiography with 33 (6%) demonstrating a likely etiology of the syncopal event such as critical valvular disease or significantly depressed left ventricular function (k = 0.78). On hospitalization, 349 (61%) patients were placed on telemetry, 19 (3%) of these had worrisome dysrhythmias (k = 0.66). 317 (55%) patients had troponin levels drawn of whom 19 (3%) had positive results (k = 1); 56 (10%) patients were discharged with monitoring with significant findings in only 2 (0.4%) patients (k = 0.65). Overall, 73 (8%, 95% CI 7-10%) studies were diagnostic. Conclusion: Although routine testing is prevalent in ED patients with syncope, the diagnostic yield is relatively low. Nevertheless, some testing, particularly echocardiography, may yield critical findings in some cases. Current efforts to reduce the cost of medical care by eliminating non-diagnostic medical testing and increasing emphasis on practicing evidence-based medicine argue for more discriminate testing when evaluating syncope. (Originally submitted as a ''late-breaker.'') Unusual fatigue was reported by 70.7% (severe 29.7%) and insomnia by 47.8% (severe 21.0%). These findings have led to risk management recommendations to consider these symptoms as predictive of acute coronary syndromes (ACS) among women visiting the ED. 
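A minimal sketch of the arithmetic underlying the sensitivity analysis described in the Methods that follow: if every woman reporting a prodromal symptom were worked up for ACS, the fraction worked up and the yield of those workups follow from the symptom's sensitivity and specificity for ACS and from ACS prevalence. The input values below are assumptions for illustration only, not the study's published table.

```python
# Minimal sketch (illustrative inputs only) of the workup-burden arithmetic behind a
# "screen everyone with the symptom" policy: fraction of all women flagged for an ACS
# workup, and the positive predictive value of the symptom for ACS.

def workup_burden(acs_prev: float, symptom_sens: float, symptom_spec: float) -> tuple[float, float]:
    """Return (fraction of all women flagged for workup, PPV of the symptom)."""
    flagged = acs_prev * symptom_sens + (1.0 - acs_prev) * (1.0 - symptom_spec)
    ppv = (acs_prev * symptom_sens) / flagged
    return flagged, ppv

# Assumed example: 5% ACS prevalence, 70% sensitivity, 40% specificity of the symptom
frac, ppv = workup_burden(0.05, 0.70, 0.40)
print(f"{frac:.0%} of women flagged; PPV {ppv:.1%}")  # ~60% flagged, PPV ~5.8%
```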
Objectives: To document the prevalence of these symptoms among all women visiting an ED. To analyze the potential effect of using these symptoms in the ED diagnostic process for ACS. Methods: A survey on fatigue and insomnia symptoms was administered to a convenience sample of all adult women visiting an urban academic ED (all arrival modes, acuity levels, all complaints). A sensitivity analysis was performed using published data and expert opinion for inputs. Results: We approached 548 women, with 379 enrollments. See table. The top box shows prevalences of prodromal symptoms among all adult female ED patients. The bottom box shows outputs from sensitivity analysis on the diagnostic effect of initiating an ACS workup for all female ED patients reporting prodromal symptoms. Conclusion: Prodromal symptoms of ACS are highly prevalent among all adult women visiting the ED in this study. This likely limits their utility in ED settings. While screening or admitting women with prodromal symptoms in the ED would probably increase sensitivity, that increase would be accompanied by a dramatic reduction in specificity. Such a reduction in specificity would translate to admitting, observing, or working up somewhere between 29% and 61% of all women visiting the ED, which is prohibitive in terms of personal costs, risks of hospitalization, and financial costs. While these symptoms may or may not have utility in other settings such as primary care, their prevalence and the implied lack of specificity for ACS suggest they will not be clinically useful in the ED. Length Methods: We examined a cohort of low-risk chest pain patients evaluated in an ED-based OU using prospective and retrospective OU registry data elements. Cox proportional hazards modeling was performed to assess the effect of testing modality (stress testing vs. CCTA) on the LOS in the CDU. As CCTA is not available on weekends, only subjects presenting on weekdays were included. Cox models were stratified on time of patient presentation to the ED, based on four-hour blocks beginning at midnight. The primary independent variable was first test modality, either stress imaging (exercise echo, dobutamine echo, stress MRI) or CCTA. Age, sex, and race were included as covariates. The proportional hazards assumption was tested using scaled Schoenfeld residuals, and the models were graphically examined for outliers and overly influential covariate patterns. Test selection was a time-varying covariate in the 8AM strata, and therefore the interaction with ln (LOS) was included as a correction term. After correction for multiple comparisons, an alpha of 0.01 was held to be significant. Results: Over the study period, 841 subjects (of 1,070 in the registry) presented on non-weekend days. The median LOS was 18.5 hours (IQR 12.4-23.3 hours), 57% were white, and 61% were female. The table shows the number of subjects in each time stratum, the number tested, and the number undergoing stress testing vs. CCTA. After adjusting all models for age, race, and sex, the hazard ratio (HR) for LOS is as shown. Only those patients presenting between 8AM and noon demonstrated a significant improvement in LOS with CCTA use (p < 0.0001). Objectives: To determine the validity of a management-focused EM OSCE as a measure of clinical skills by determining the correlation between OSCE scores and faculty assessment of student performance in the ED. Methods: Medical students in a fourth-year EM clerkship were enrolled in the study. 
On the final day of the clerkship, students participated in a five-station EM OSCE. Student performance on the OSCE was evaluated using a task-based evaluation system with 3-4 critical management tasks per case. Task performance was evaluated using a three-point system: performed correctly/timely (2), performed incorrectly/late (1), or not performed (0). Descriptive anchors were used for performance criteria. Communication skills were also graded on a three-point scale. Student performance in the ED was based on traditional faculty assessment using our core-competency evaluation instrument. A Pearson correlation coefficient was calculated for the relationship between OSCE score and ED performance score. Case item analysis included determination of difficulty and discrimination. The ACGME also requires that trainees are evaluated on these six core competencies (6CCs) during their residency. Trainee evaluations in the 6CCs are frequently made on a subjective rating scale. One of the recognized problems with a subjective scale is the rating stringency of the rater, commonly known as the Hawk-Dove effect. This has been seen in Standardized Clinical Exam scoring. Recent data have shown that score variance can be related to evaluator performance with a negative correlation. Higher-scoring physicians were more likely to be a stringent or Hawk-type rater on the same evaluation. It is unclear if this pattern also occurs in the subjective ratings that are commonly used in assessments of the 6CCs. Objectives: To compare attending physician scores on the ACGME 6CCs with attendings' ratings of residents, looking for a negative correlation or Hawk-Dove effect. Methods: Residents are routinely evaluated on the 6CCs with a 1-9 numerical rating scale as part of their training. The evaluation database was retrospectively reviewed. Residents anonymously scored attending physicians on the 6CCs with a cross-sectional survey that utilized the same rating scale, anchors, and prompts as the resident evaluations. Average scores for and by each attending were calculated, and a Pearson correlation was calculated by core competency and overall. Results: In this IRB-approved study, a total of 43 attending physicians were scored on the 6CCs with 447 evaluations by residents. Attendings evaluated 162 residents, with a total of 1,678 evaluations completed over a 5-year period. The attending mode score was 9, with a range of 2 to 9; resident scores had a mode of 8 with a range of 1 to 9. There was no correlation between the rated performance of the attendings, overall or in each of the 6CCs, and the scores they gave (p = 0.065-0.861). Conclusion: Hawk-Dove effects can be seen in some scoring systems and have the potential to affect trainee evaluation on the ACGME core competencies. However, a negative correlation to support a Hawk-Dove scoring pattern was not found in EM resident evaluations by attending physicians. This study is limited by being a single-center study and utilizing grouped data to preserve resident anonymity. Background: All ACGME-accredited residency programs are required to provide competency-based education and evaluation. Graduating residents must demonstrate competency in six key areas. Multiple studies have outlined strategies for evaluating competency, but data regarding residents' self-assessments of these competencies as they progress through training and beyond are scarce. 
Objectives: Using data from longitudinal surveys by the American Board of Emergency Medicine, the primary objective of this study was to evaluate whether resident self-assessments of performance in required competencies improve over the course of graduate medical training and in the years following. Additionally, resident self-assessment of competency in academic medicine was also analyzed. Methods: This is a secondary analysis of data gathered from two rounds of the ABEM Longitudinal Study of Emergency Medicine Residents (1996-98 and 2001-03) and three rounds of the ABEM Longitudinal Study of Emergency Physicians (1999, 2004, 2009). In both surveys, physicians were asked to rate a list of 18 items in response to the question, ''What is your current level of competence in each of the following aspects of work in EM?'' The rated items were grouped according to the ACGME required competencies of Patient Care, Medical Knowledge, Practice-based Learning and Improvement, Interpersonal and Communication Skills, and System-based Practice. An additional category for academic medicine was also added. Results: Rankings improved in all categories during residency training. Rankings in three of the six categories improved from the weak end of the scale to the strong end of the scale. There is a consistent decline in rankings one year after graduation from residency. The greatest drop is in Medical Knowledge. Mean self-ranking in academic medicine competency is uniformly the lowest ranked category for each year. Conclusion: While self-assessment is of uncertain value as an objective assessment, these increasing rankings suggest that emergency medicine residency programs are successful at improving residents' confidence in the required areas. Residents do not feel as confident about academic medicine as they do about the ACGME required competencies. The uniform decline in rankings the first year after residency is an area worthy of further inquiry. Screening Medical Student Rotators From Outside Institutions Improves Overall Rotation Performance Shaneen Doctor, Troy Madsen, Susan Stroud, Megan L. Fix University of Utah, Salt Lake City, UT Background: Emergency medicine is a rapidly growing field. Many student rotations are limited in their ability to accommodate all students and must limit the number of students they allow per rotation. We hypothesize that pre-screening visiting student rotators will improve overall student performance. Objectives: To assess the effect of applicant screening on overall rotation grade and mean end-of-shift card scores. Methods: We initiated a medical student screening process for all visiting students applying to our 4-week elective EM rotation starting in 2008. This consisted of reviewing board scores and requiring a letter of intent. Students from our home institution were not screened. All end-of-shift evaluation cards and final rotation grades (honors, high pass, pass, fail) from 2004 to 2011 were analyzed. We identified two cohorts: home students (control) and visiting students. We compared pre-intervention (2004-2008) and post-intervention (2008-2011) scores and grades. End-of-shift performance scores are recorded using a five-point scale that assesses indicators such as fund of knowledge, judgment, and follow-through to disposition. Mean ranks were compared and P-values were calculated using the Armitage test of trend and confirmed using t-tests.
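As a simplified check of the pre/post comparison described above, the snippet below applies Fisher's exact test to the visiting-student honors counts reported in the Results that follow; the authors used the Armitage trend test over the full ordinal grade distribution, which this 2x2 summary does not reproduce exactly.

```python
from scipy.stats import fisher_exact

# Visiting students achieving honors: 12/91 pre-intervention vs 31/81 post-intervention.
table = [[31, 81 - 31],   # post-intervention: honors, non-honors
         [12, 91 - 12]]   # pre-intervention:  honors, non-honors
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.6f}")
```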
Results: We identified 162 visiting students (91 pre, 81 post) and 160 home students (90 pre, 80 post). 12 (13.2%) visiting students achieved honors pre-intervention while 31 (38.3%) achieved honors post-intervention (p = 0.000093). No significant difference was seen in home student grades: 28 (31.1%) received honors pre-2008 and 17 (21.3%) received honors post-2008. Conclusion: We found that implementation of a screening process for visiting medical students improved overall rotation scores and grades as compared to home students who did not receive screening. Screening rotating students may improve the overall quality of applicants and thereby the residency program. Background: There are many descriptions in the literature of computer-assisted instruction in medical education, but few studies comparing them to traditional teaching methods. Objectives: We sought to compare the suturing skills and confidence of students receiving video preparation before a suturing workshop versus a traditional instructional lecture. Methods: 88 first- and second-year medical students were randomized into two groups. The control group was given a lecture followed by 40 minutes of suturing time. The video group was provided with an online suturing video at home, no lecture, and given 40 minutes of suturing time during the workshop. Both groups were asked to rate their confidence before and after the workshop, and their belief in the workshop's effectiveness. Each student was also videotaped suturing a pig's foot after the workshop and graded on a previously validated 16-point suturing checklist. 83 videos were scored. Results: There was no significant difference between the test scores of the lecture group (M = 11.21, SD = 3.17, N = 42) and the video group (M = 11.27, SD = 2.53, N = 41) using the two-sample independent t-test for equal variances (t(81) = -0.09, p = 0.93). There was a statistically significant difference in the proportion of students scoring correctly for only one point: ''Curvature of needle followed'': 25/42 in the lecture group and 35/41 in the video group (chi-square = 6.92, df = 1, p = 0.008). Students in the video group were found to be 2.45 times more likely to have a neutral or favorable feeling of suturing confidence before the workshop (p = 0.067, CI 0.94-6.4) using a proportional odds model. No association was detected between group assignment and level of suturing confidence after the workshop (p = 0.475). There was also no association detected between group assignment and opinion of the suturing workshop (p = 0.681) using a logistic regression model. Among those students who indicated a lack of confidence before training, there was no detected association (p = 0.967) between group assignment and having an improved confidence using a logistic regression model. Conclusion: Students in the video group and students in the control group achieved similar levels of suturing skill and confidence, and equal belief in the workshop's effectiveness. This study suggests that video instruction could be a reasonable substitute for lectures in procedural education. Background: Accurate interpretation of the ECG in the emergency department is not only clinically important but also critical to assess medical knowledge competency. With limitations to expansion of formal didactics, educational technology offers an innovative approach to improve the quality of medical education.
Objectives: The aim of this study was to assess an online multimedia-based ECG training module evaluating ST elevation myocardial infarction (STEMI) identification among medical students. Methods: A convenience sample of fifty-two medical students on their EM rotations at an academic medical center with an EM residency program was evaluated in a before-after fashion during a 6-month period. One cardiologist and two ED attending physicians independently validated a standardized exam of ten ECGs: four were normal ECGs, three were classic STEMIs, and three were subtle STEMIs. The gold standard for diagnosis was confirmed acute coronary thrombus during cardiac catheterization. After evaluating the 10 ECGs, students completed a pre-intervention test wherein they were asked to identify patients who required emergent cardiac catheterization based on the presence or absence of ST segment elevation on ECG. Students then completed an online interactive multimedia module containing 13 minutes of STEMI training based on American Heart Association/American College of Cardiology guidelines on STEMI. Medical students were asked to complete a post-test of the 10 ECGs after completing the online multimedia module. Objectives: Our objective was to quantify the number of pre-verbal pediatric head CTs performed at our community hospital that could have been avoided by utilizing the PECARN criteria. Methods: We conducted a standardized chart review of all children under the age of 2 who presented to our community hospital and received a head CT between Jan 1st, 2010 and Dec 31st, 2010. Following recommended guidelines for conducting a chart review, we: 1) utilized four blinded chart reviewers, 2) provided specific training, 3) created a standardized data extraction tool, and 4) held periodic meetings to evaluate coding discrepancies. Our primary outcome measure was the number of patients who were PECARN negative and received a head CT at our institution. Our secondary outcome was to reevaluate the sensitivity and specificity of the PECARN criteria to detect ciTBI in our cohort. Data were analyzed using descriptive statistics, and 95% confidence intervals were calculated around proportions using the modified Wald method. Results: A total of 138 patients under the age of 2 received a head CT at our institution during the study period. 23 patients were excluded from the final analysis because their head CTs were not for trauma. The prevalence of a ciTBI in our cohort was 2.6% (95% CI 0.6%-7.7%). Background: Diffusion tensor imaging (DTI) measures disruption of axonal integrity on the basis of anisotropic diffusion properties. Findings on DTI may relate to the injury, as well as the severity of postconcussion syndrome (PCS) following mTBI. Objectives: To examine acute anisotropic diffusion properties based on DTI in youth with mTBI relative to orthopedic controls and to examine associations between white matter (WM) integrity and PCS symptoms. Methods: Interim analysis of a prospective case-control cohort involving 12 youth ages 11-16 years with mTBI and 10 orthopedic controls requiring extremity radiographs. Data collected in the ED included demographics, clinical information, and PCS symptoms measured by the postconcussion symptom scale. Within 72 hours of injury, symptoms were re-assessed and a 61-direction, diffusion-weighted, spin-echo imaging scan was performed on a 3T Philips scanner. DTI images were analyzed using tract-based spatial statistics. Fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) were measured.
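The pediatric head CT abstract above reports proportions with modified Wald confidence intervals; a minimal sketch of that interval (the adjusted Wald method, which adds z^2/2 successes and z^2 trials before applying the usual formula) is shown below. The example call assumes 3 ciTBIs among 115 analyzed patients, an assumption consistent with the reported 2.6% prevalence but not stated explicitly in the abstract.

```python
import math

def modified_wald_ci(successes, n, z=1.96):
    """Adjusted ('modified') Wald interval for a binomial proportion."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half_width = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

# Hypothetical inputs: 3 ciTBIs among 115 analyzed patients (~2.6%).
low, high = modified_wald_ci(3, 115)
print(f"{low:.1%} to {high:.1%}")   # roughly 0.6% to 7.7%
```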
Results: There were no group demographic differences between mTBI cases and controls. Presenting symptoms within the mTBI group included GCS = 15 83%, loss of consciousness 33%, amnesia 33%, post-traumatic seizure 8%, headache 83%, vomiting 33%, dizziness 42%, and confusion 42%. PCS symptoms were greater in mTBI cases than in the controls at ED visit (30.1 ± 17.0 vs. 15.5 ± 16.8, p < 0.06) and at the time of scan (19.1 ± 12.9 vs. 5.7 ± 6.5, p < 0.01). The mTBI group displayed decreased FA in cerebellum and increased MD and AD in the cerebral WM relative to controls (uncorrected p < 0.05). Increased FA in cerebral WM was also observed in mTBI patients but the group difference was not significant. PCS symptoms at the time of the scan were positively correlated with FA and inversely correlated with RD in extensive cerebral WM areas (p < 0.05, uncorrected). In addition, PCS symptoms in mTBI patients were also found to be inversely correlated with MD, AD, and RD in cerebellum (p < 0.05). Conclusion: DTI detected axonal damage in youth with mTBI which correlated with PCS symptoms. DTI performed acutely after injury may augment detection of injury and help prediction of those with worse outcomes. Background: Sports-related concussion among professional, collegiate, and more recently high school athletes has received much attention from the media and medical community. To our knowledge, there is a paucity of research in regard to sports-related concussion in younger athletes. Objectives: The aim of this study was to evaluate parental knowledge of concussion in young children who participate in recreational tackle football. Methods: Parents/legal guardians of children aged 5-15 years enrolled in recreational tackle football were asked to complete an anonymous questionnaire based on the CDC's Heads Up: Concussion In Youth Sports quiz. Parents were asked about their level of agreement in regard to statements that represent definition, symptoms, and treatment of concussion. Results: A total of 310 out of 369 parents voluntarily completed the questionnaire (84% response rate). Parent and child demographics are listed in Table 1 . Ninety four percent of parents believed their child had never suffered a concussion. However, when asked to agree or disagree with statements addressing various aspects of concussion, only 13% (n = 41) could correctly identify all seven statements. Most did not identify that a concussion is considered a mild traumatic brain injury and can be achieved from something other than a direct blow to the head. Race, sex, and zip code had no significant association with correctly answering statements. Education (0.24; p < 0.01) and number of years the child played (0.11; p < 0.05) had a small effect. Fifty-three percent of parents reported someone had discussed the definition of concussion with them and 58% the symptoms of concussion. See Table 2 for source of information to parents. No parent was able to classify all symptoms listed as correctly related or not related to concussion. However, identification of correct concussion definitions correlated with identification of correct symptoms (0.25; p < 0.05). Conclusion: While most parents had received some education regarding concussion from a health care provider, important misconceptions remain among parents of young athletes regarding the definition, symptoms, and treatment of concussion. This study highlights the need for health care providers to increase educational efforts among parents of young athletes in regard to concussion. 
2/2 (100%) of patients with baseline liver dysfunction were 25(OH)D deficient, and 5/6 (83%) of deaths were in patients who had insufficient levels of 25(OH)D. There was an inverse association between 25(OH)D level and TNF-a (p = 0.03; Figure 2) and IL-6 (p = 0.04). Background: Fever is common in the emergency department (ED), and 90% of those diagnosed with severe sepsis present with fever. Despite data suggesting that fever plays an important role in immunity, human data conflict on the effect of antipyretics on clinical outcomes in critically ill adults. Objectives: To determine the effect of ED antipyretic administration on 28-day in-hospital mortality in patients with severe sepsis. Methods: Single-center, retrospective observational cohort study of 171 febrile severe sepsis patients presenting to an urban academic 90,000-visit ED between June 2005 and June 2010. All ED patients meeting the following criteria were included: age ≥ 18, temperature ≥ 38.3°C, suspected infection, and either systolic blood pressure ≤ 90 mmHg after a 30 mL/kg fluid bolus or lactate ≥ 4. Patients were excluded for a history of cirrhosis or acetaminophen allergy. Antipyretics were defined as acetaminophen, ibuprofen, or ketorolac. Results: One hundred thirty-five (78.9%) patients were treated with an antipyretic medication (89.4% acetaminophen). Intubated patients were less likely to receive antipyretic therapy (51.9% vs. 84.0%, p < 0.01), but the groups were otherwise well matched. Patients requiring ED intubation (n = 27) had much higher in-hospital mortality (51.9% vs. 7.6%, p < 0.01). Patients given an antipyretic in the ED had lower mortality (11.9% vs. 25.0%, p < 0.05). When multivariable logistic regression was used to account for APACHE-II, intubation status, and fever magnitude, antipyretic therapy was not associated with mortality (adjusted OR 0.97, 0.31-3.06, p = 0.96). Conclusion: Although patients treated with antipyretic therapy had lower 28-day in-hospital mortality, antipyretic therapy was not independently associated with mortality in multivariable regression analysis. These findings are hypothesis-generating for future clinical trials, as the role of fever control has been largely unexplored in severe sepsis (Grant UL1 RR024992, NIH-NCRR). Mean caval index changed by -0.09 ± 0.14 (CI -0.14, -0.05), and all changes were statistically significant. The groups receiving 10 ml/kg and 30 ml/kg had statistically significant changes in caval index; however, the 30 ml/kg group had no significant change in mean IVC diameter. On one-way ANOVA, differences between the means of the groups were not statistically significant. Conclusion: Overall, there were statistically significant differences in mean IVC-US measurements before and after fluid loading, but not between groups. Fasting asymptomatic subjects had a wide inter-subject variation in both baseline IVC-US measurements and fluid-related changes. The wide differences within our 30 ml/kg group may limit conclusions regarding proportionality. There were significant differences in performance on ED measures by ownership (P < 0.0001) and region (P = 0.0002). Scores on ED process measures were highest at for-profit hospitals (27% above average) and hospitals in the South (5% above average), and lowest at public hospitals (16% below average) and hospitals in the Northeast (8% below average). Conclusion: There was considerable variation in performance on the ED measures included in the VBP program by hospital ownership and region.
ED directors may come under increasing pressure to improve scores in order to reduce potential financial losses under the program. Our data provide early information on the types of hospitals with the greatest opportunity for improvement. Methods: Design/Setting - An independent agency mandated by the government collected and analyzed ED patient experience data using a comprehensive, validated multidimensional instrument and a random periodic sampling methodology of all ED patients. A prospective pre-post experimental study design was employed in the eight community and tertiary care hospitals most affected by crowding. Two 5.5 month study periods were evaluated (Pre: 28/06-12/12/2010; Post: 13/12/2010-29/05/2011). Outcomes - The primary outcome was patient perception of wait times and crowding reported as a composite mean score (0-100) from six survey items with higher scores representing better ratings. The overall rating of care by ED patients (composite score) and other dimensions of care were collected as secondary outcomes. All outcomes were compared using chi-square and two-tailed Student's t-tests. Results: A total of 3774 surveys were completed in both the pre-OCP and post-OCP study periods, representing a response rate of 45%. We compared in-patient mortality from AMI for patients who lived in a community within either 2.5 miles or 5 miles of a closure but did not need to travel farther to the nearest ED with those who did not. We used patient-level discharge data from the California Office of Statewide Health Planning and Development (OSHPD) database, and locations of patient residence and hospitals were geo-coded to determine any changes in distance to the nearest ED. We applied a generalized linear mixed effects model framework to estimate a patient's likelihood to die in the hospital of AMI as a function of being affected by a neighborhood closure event. Background: Fragmentation of care has been recognized as a problem in the US health care system. However, little is known about ED utilization after hospitalization, a potential marker of poor outpatient care coordination after discharge, particularly for common inpatient-based procedures. Objectives: To determine the frequency and variability in ED visits after common inpatient procedures, how often they result in readmission, and related payments. Methods: Using national Medicare data for 2005-2007, we examined ED visits within 30 days of hospital discharge after six common inpatient procedures: percutaneous coronary intervention, coronary artery bypass grafting (CABG), elective abdominal aortic aneurysm repair, back surgery, hip fracture repair, and colectomy. We categorized hospitals into risk-adjusted quintiles based on the frequency of ED visits after the index hospitalization. We report visits by primary diagnosis ICD-9 codes and rates of readmission. We also assessed payments related to these ED visits. Results: Overall, the highest quintile of hospitals had 30-day ED visit rates that ranged from a low of 17.8% with an associated 7.3% readmission rate (back surgery) to a high of 27.8% with an associated 13.6% readmission rate (CABG). The greatest variability, more than 3-fold, was found among patients undergoing colectomy, for whom the worst-performing hospitals saw 24.1% of their patients experience an ED visit within 30 days while the best-performing hospitals saw 7.4%.
Average total payments for the 30-day window from initial discharge across all surgical cohorts varied from $18,912 for patients discharged without a subsequent ED visit; $20,061 for those experiencing an ED visit(s); $38,762 for those readmitted through the ED; and $33,632 for those readmitted from another source. If all patients who did not require readmission also did not incur an ED visit within the 30-day window, this would represent a potential cost savings of $125 million. Conclusion: Among elderly Medicare recipients, there was significant variability between hospitals for 30-day ED visits after six common inpatient procedures. The ED visit may be a marker of poor care coordination in the immediate discharge period. This presents an opportunity to improve post-procedure outpatient care coordination, which may save costs related to preventable ED visits and subsequent readmissions. Objectives: We sought to assess the effect of pharmacist medication review on ED patient care, in particular time from physician order to medication administration for the patient (order-to-med time). Methods: We conducted a multi-center, before-after study in two EDs (urban academic teaching hospital and suburban community hospital, combined census of 61,000) after implementation of the electronic prospective pharmacy review system (PRS). The system allowed a pharmacist to review all ED medication orders electronically at the time of physician order and either approve or alter the order. We studied a 5-month time period before implementation of the system (pre-PRS, 7/1/10-11/30/10) and after implementation (post-PRS, 7/1/11-11/30/11). We collected data on all ED medication orders including dose, route, class, pharmacist review action, time of physician order, and time of medication administration. Differences in order-to-med time between the pre- and post-PRS study periods were compared. Results: ED metrics that were significantly associated with LBTCs varied across ED patient-volume categories (Table). For EDs seeing less than 20K patients annually, the percentage of EMS arrivals admitted to the hospital and ED square footage were both weakly associated with LBTCs (p = 0.09). For EDs seeing 20K-39K patients, median ED length of stay (LOS), percent of patients admitted to hospital through the ED, percent of EMS arrivals admitted to hospital, and percent of pediatric patients were all positively associated, while percent of patients admitted to the hospital was negatively associated with LBTCs. For EDs seeing 40K-59K, median LOS and percent of x-rays performed were positively associated, while percent of EKGs performed was negatively associated with LBTCs. For EDs seeing 60K-79K, percent of patients admitted to the hospital through the ED was negatively associated and percent of EKGs performed was positively associated with LBTCs. For EDs with volume greater than 80K, none of the selected variables were associated with LBTCs. Conclusion: ED factors that help explain high LBTC rates differ depending on the size of an ED. Interventions attempting to improve LBTC rates by modifying ED structure or process will need to consider baseline ED volume as a potential moderating influence. Objectives: Our study sought to compare bacterial growth of samples taken from surfaces after use of a common approved QUAT compound and a virtually non-toxic, commercially available solution containing elemental silver (0.02%), hydrogen peroxide (15%), and peroxyacetic acid (20%) (SHP) in a working ED.
We hypothesized that, based on the controlled laboratory data available, the SHP compound would be more effective on surfaces in an active urban ED. Methods: We cleaned and then sampled three types of surfaces in the ED (suture cart, wooden railing, and the floor) during midday hours one minute after application of tap water, QUAT, and SHP and then again at 24 hours without additional cleaning. Conventional environmental surface surveillance RODAC media plates were used for growth assessment. Images of bacterial growth were quantified at 24 and 48 hours. Standard cleaning procedures by hospital staff were maintained per usual. Results: SHP was superior to control and QUAT one minute after application on all three surfaces. QUAT and water had 10x and 40x more bacterial growth than the surface cleaned with SHP, respectively. At 24 hours, the SHP area produced fewer colonies sampled from the wooden railing: 4x more bacteria for QUAT and 5x for water when compared to SHP. 24-hour cultures from the cart and floor had confluent growth and could not be quantified. Conclusion: SHP outperforms QUAT in sterilizing surfaces after a one-minute application. SHP may be superior as a non-toxic, non-corrosive, and effective agent for surfaces in the demanding ED setting. Further studies should examine sporicidal and virucidal properties in a similar environment. Objectives: Evaluate the effect on patient satisfaction of increasing waiting room times and physician evaluation times. Methods: Emergency department flow metrics were collected on a daily basis as well as average daily patient satisfaction scores. The data were from July 2010 through February 2011, in an urban hospital with an annual census of 44,000. The data were divided into equal intervals. The arrival-to-room time was divided into 15-minute intervals up to 135 minutes, with the last group being greater than 136 minutes. The physician evaluation times were divided into 20-minute intervals up to 110 minutes, with the last group greater than 111 minutes (46 days in this group). Data were analyzed using means and standard deviations, as well as ANOVA for comparison between groups. Results: The overall satisfaction score for the outpatient emergency visit was higher when the patient was in a room within 15 minutes of arrival (88.4, SD 5.9); analysis of variance between the groups yielded p = 0.13 for the means of each interval (see Table 1). The total satisfaction with the visit as well as satisfaction with the provider dropped when the evaluation extended over 110 minutes, but the difference was not statistically significant on ANOVA analysis (see Table 2 for means). Conclusion: Once a patient's time in the waiting room extends beyond 15 minutes, you have lost a significant opportunity for patient satisfaction; once they have been in the waiting room for over 120 minutes, you are also much more likely to receive a poor score. Physician evaluation time scores are much more consistent, but as evaluation times extended beyond a total of 110 minutes we began to see a downward trend in the satisfaction score. Results: In all three EDs, pain medication rates (both in ED and Rx) varied significantly by clinical factors including location of pain, discharge diagnosis, pain level, and acuity. We observed little to no variation in pain medication rates by patient factors such as age, sex, race, insurance, or prior ED visits. The table displays key pain management practices by site and provider.
After adjusting for patient and clinical characteristics, significant differences in pain medication rates remained by provider and site (see figure) . Conclusion: Within this health system, the approach to pain management by both providers and sites is not standardized. Investigation of the potential effect of this variability on patient outcomes is warranted. Results: All measures showed significant differences, p < 0.01. Average pts/h decreased post-CPOE and did not recover post transitional period, 1.92 ± 0.13 vs 1.75 ± 0.11, p < 0.05. RVU/h also decreased post-CPOE and did not recover post transitional period, 5.23 ± 0.37 vs 4.79 ± 0.32 and 4.82 ± 0.33, p < 0.05. Charges/h also decreased after CPOE implementation and did not recover after system optimization. There was a sustained significant decrease in charges/h of 4.5% ± 6.5% post CPOE and 3.6% ± 6.4% post optimization, p < 0.05. Sub-group analysis for each provider group was also evaluated and showed variability for different providers. Conclusion: There was a significant decrease in all productivity metrics four months after the implementation of CPOE. The system did undergo optimization initiated by providers with customization for ease and speed of use. However, productivity measurements did not recover after these changes were implemented. These data show that with the implementation of a CPOE system there is a decrease in productivity that continues even after a transition period and system customization. Background: Procedural competency is a key component of emergency medicine residency training. Residents are required to log procedures to document quantity of procedures and identify potential weaknesses in their training. As emergency medicine evolves, it is likely that the type and number of procedures change over time. Also, exposure to certain rare procedures in residency is not guaranteed. Objectives: We seek to delineate trends in type and volume of core EM procedures over a decade of emergency medicine residents graduating from an accredited four-year training program. Methods: Deidentified procedure logs from 2003-2011 were analyzed to assess trends in type and quantity of procedures. Procedure logs were self-reported by individual residents on a continuous basis during training onto a computer program. Average numbers of procedures per resident in each graduating class were noted. Statistical analysis was performed using SPSS and includes a simple linear regression to evaluate for significant changes in number of procedures over time and an independent samples two-tailed t-test of procedures performed before and after the required resident duty hours change. Results: A total of 112 procedure logs were analyzed and the frequency of 29 different procedures was evaluated. A significant increase was seen in one procedure, the venous cutdown. Significant decreases were seen in 12 procedures including key procedures such as central venous catheters, tube thoracostomy, and procedural sedation. The frequency of five high-stakes/ resuscitative procedures, including thoracotomy and cricothyroidotomy, remained steady but very low (<4 per resident over 4 years). Of the remaining 11 procedures, 8 showed a trend toward decreased frequency, while only 5 increased. Conclusion: Over the past 9 years, EM residents in our program have recorded significantly fewer opportunities to perform most procedures. Certain procedures in our emergency medicine training program have remained stable but uncommon over the course of nearly a decade. 
To ensure competency in uncommon procedures, innovative ways to expose residents to these potentially life-saving skills must be considered. These may include practice on high-fidelity simulators, increased exposure to procedures on patients during residency (possibly on off-service rotations), or practice in cadaver and animal labs. Objectives: To study the effectiveness of a unique educational intervention using didactic and hands-on training in USGPIV. We hypothesized that senior medical students would improve performance and confidence with USGPIV after the simulation training. Methods: Fourth-year medical students were enrolled in an experimental, prospective, before-and-after study conducted at a university medical school simulation center. Participants' baseline USGPIV skills on simulation vascular phantoms were graded by ultrasound expert faculty using standardized checklists. The primary outcome was time to cannulation, and secondary outcomes were ability to successfully cannulate, number of needle attempts, and needle-tip visualization. Subjects then observed a 15-minute presentation on correct performance of USGPIV followed by a 30-minute hands-on practical session using the vascular simulators with a 1:4 to 1:6 ultrasound instructor-to-student ratio. An expert blinded to the participants' initial performance graded post-educational intervention USGPIV ability. Pre- and post-intervention surveys were obtained to evaluate USGPIV confidence, previous experience with ultrasound, peripheral IV access, USGPIV, and satisfaction with the educational format. Objectives: This study examines the grade distribution of resident evaluations when the identity of the evaluator was anonymous as compared to when the identity of the evaluator was known to the resident. We hypothesize that there will be no change in the grades assigned to residents. Methods: We retrospectively reviewed all faculty evaluations of residents and grades assigned from July 1, 2008 through November 15, 2011. Prior to July 1, 2010, the identity of the faculty evaluators was anonymous, while after this date, the identity of the faculty evaluators was made known to the residents. Throughout this time period, residents were graded on a five-point scale. Each resident evaluation included grades in the six ACGME core competencies as well as in select other abilities. Specific abilities evaluated varied over the dates analyzed. Evaluations of residents were assigned to two groups, based on whether the evaluator was anonymous or made known to the resident. Grades were compared between the two groups. Results: A total of 10,760 grades were assigned in the anonymous group, with an average grade of 3.90 (95% CI 3.88, 3.91). A total of 7,122 grades were assigned in the known group, with an average grade of 3.77 (95% CI 3.75, 3.79). Specific attention was paid to assignment of unsatisfactory grades (1 or 2 on the five-point scale). The anonymous group assigned 355 grades in this category, comprising 3.3% of all grades assigned. The known group assigned 100 grades in this category, comprising 1.4% of all grades assigned. Unsatisfactory grades were assigned by the anonymous group 1.9% (95% CI 1.5, 2.3) more often. Additionally, 5.8% (95% CI 3.8, 6.8) fewer exceptional grades (4 or 5 on the five-point scale) were assigned by the anonymous group. Conclusion: The average grade assigned was closer to average (3 on a five-point scale) when the identity of the evaluator was made known to the residents.
Additionally, fewer unsatisfactory and exceptional grades were assigned in this group. This decrease of both unsatisfactory and exceptional grades may make it more difficult for program directors to effectively identify struggling and strong residents, respectively. Testing to Improve Knowledge Retention from Traditional Didactic Presentations: A Pilot Study David Saloum, Amish Aghera, Brian Gillett Maimonides Medical Center, Brooklyn, NY Background: The ACGME requires an average of at least 5 hours of planned educational experiences each week for EM residents, which traditionally consists of formal lecture-based instruction. However, retention by adult learners is limited when material is presented in a lecture format. More effective methods such as small group sessions, simulation, and other active learning modalities are time- and resource-intensive and therefore not practical as a primary method of instruction. Thus, the traditional lecture format remains heavily relied upon. Efficient strategies to improve the effectiveness of lectures are needed. Testing utilized as a learning tool to force immediate recall of lecture material is an example of such a strategy. Objectives: To evaluate the effect of immediate post-lecture short-answer quizzes on EM residents' retention of lecture content. Methods: In this prospective randomized controlled study, EM residents from a community-based 3-year training program were randomized into two groups. Block randomization provided a similar distribution of postgraduate year training levels and performance on both USMLE and in-training examinations between the two groups. Each group received two identical 50-minute lectures on ECG interpretation and aortic disease. One group of residents completed a five-question short-answer quiz immediately following each lecture (n = 13), while the other group received the lectures without subsequent quizzes (n = 16). The quizzes were not scored or reviewed with the residents. Two weeks later, retention was assessed by testing both groups with a 20-question multiple choice test (MCT) derived in equal part from each lecture. Mean and median test results were then compared between groups. Statistical significance was determined using a paired t-test of median test scores from each group. Results: Residents who received immediate post-lecture quizzes demonstrated significantly higher MCT scores (mean = 57%, median = 58%, n = 10) compared to those receiving lectures alone (mean = 48%, median = 50%, n = 15); p = 0.023. Conclusion: Short-answer testing immediately after a traditional didactic lecture improves knowledge retention at a 2-week interval. Limitations of the study are that it is a single-center study and that long-term retention was not assessed. Background: The task of educating the next generation of physicians is steadily becoming more difficult with the inherent obstacles that exist for faculty educators and the work-hour restrictions that students must adhere to. These obstacles make it very difficult to develop curricula that not only cover important topics but also do so in a fashion that supports and reinforces the clinical experience. Several areas of medical education are using more asynchronous techniques and self-directed online educational modules to overcome these obstacles.
Objectives: The aim of this study was to demonstrate that educational information pertaining to core pediatric emergency medicine topics could be as effectively disseminated to medical students via self-directed online educational modules as it could through traditional didactic lectures. Methods: This was a prospective study conducted from August 1, 2010 through December 31, 2010. Students participating in the emergency medicine rotation at Carolinas Medical Center were enrolled and received education in a total of eight core concepts. The students were divided into two groups which changed on a monthly basis. Group 1 was taught four concepts via self-directed online modules and four traditional didactic lectures. Group 2 was taught the same core concepts, but in opposite fashion to Group 1. Each student was given a pre-test, post-test, and survey at the conclusion of the rotation. Results: A total of 28 students participated in the study. Students, regardless of which group assigned, performed similarly on the pre-test, with no statistical difference among scores. When looking at the summative total scores between online and traditional didactic lectures, there was a trend towards significance for more improvement among those taught online. The student's assessment of the online modules showed that the majority either felt neutral or preferred the online method. The majority thought the depth and length of the modules were perfect. Most students thought having access to the online modules was valuable and all but one stated that they would use them again. Conclusion: This study demonstrates that self-directed, online educational modules are able to convey important concepts in emergency medicine similar to traditional didactics. It is an effective learning technique that offers several advantages to both the educator and student. Background: Critical access hospitals (CAH) provide crucial emergency care to rural populations that would otherwise be without ready access to health care. Data show that many CAH do not meet standard adult quality metrics. Adults treated at CAH often have inferior outcomes to comparable patients cared for at other community-based emergency departments (EDs). Similar data do not exist for pediatric patients. Objectives: As part of a pilot project to improve pediatric emergency care at CAH, we sought to determine whether these institutions stock the equipment and medications necessary to treat any ill or injured child who presents to the ED. Methods: Five North Carolina CAH volunteered to participate in an intensive educational program targeting pediatric emergency care. At the initial site visit to each hospital, an investigator, in conjunction with the ED nurse manager, completed a 109-item checklist of commonly required ED equipment and medications based on the 2009 ACEP ''Guidelines for Care of Children in the Emergency Department''. The list was categorized into monitoring and respiratory equipment, vascular access supplies, fracture and trauma management devices, and specialized kits. If available, adult and pediatric sizes were listed. Only hospitals stocking appropriate pediatric sizes of an item were counted as having that item. The pharmaceutical supply list included antibiotics, antidotes, antiemetics, antiepileptics, intubation and respiratory medications, IV fluids, and miscellaneous drugs not otherwise categorized. Results: Overall, the hospitals reported having 91% of the items listed (range 87-96%). 
The two greatest deficiencies were fracture devices (range 33-66%), with no hospital stocking infant-sized cervical collars, and antidotes, with no hospital stocking pralidoxime, 1/5 hospitals stocking fomepizole, and 2/5 hospitals stocking pyridoxine and methylene blue. Only one of the five institutions had access to prostaglandin E. The hospitals cited cost and rarity of use as the reasons for not stocking these medications. Conclusion: The ability of CAH to care for pediatric patients does not appear to be hampered by a lack of equipment. Ready access to infrequently used, but potentially lifesaving, medications is a concern. Tertiary care centers preparing to accept these patients should be aware of these potential limitations as transport decisions are made. Background: While incision and drainage (I&D) alone has been the mainstay of management of uncomplicated abscesses for decades, some advocate for adjunct antibiotic use, arguing that available trials are underpowered and that antibiotics reduce treatment failures and recurrence. Objectives: To investigate the role of antibiotics in addition to I&D in reducing treatment failure as compared to management with I&D alone. Methods: We performed a search using MEDLINE, EMBASE, Web of Knowledge, and Google Scholar databases (with a medical librarian) to include trials and observational studies analyzing the effect of antibiotics in human subjects with skin and soft-tissue abscesses. Two investigators independently reviewed all the records. We performed three overlapping meta-analyses: 1. Only randomized trials comparing antibiotics to placebo on improvement of the abscess during standard follow-up. 2. Trials and observational studies comparing appropriate antibiotics to placebo, no antibiotics, or inappropriate antibiotics (as gauged by wound culture) on improvement during standard follow-up. 3. Only trials, but broadened outcome to include recurrence or new lesions during a longer follow-up period as treatment failure. We report pooled risk ratios (RR) using a fixed-effects model for our point estimates with Shore-adjusted 95% confidence intervals (CI). Results: We screened 1,937 records, of which 12 studies fit inclusion criteria, 9 of which were meta-analyzed (5 trials, 4 observational studies) because they reported results that could be pooled. Of the 9 studies, 5 enrolled subjects from the ED, 2 from a soft-tissue infection clinic, and 2 from a general hospital without definition of enrollment site. Five studies enrolled primarily adults, 3 primarily children, and 1 without specification of ages. After pooling results for all randomized trials only, the RR = 1.03 (95% CI: 0.97-1.08). When the exposure was ''appropriate'' antibiotics (using trials and observational studies), the pooled RR = 1.01 (95% CI: 0.98-1.03). When we broadened our treatment failure criteria to include recurrence or new lesions at longer lengths of follow-up (trials only), we noted an RR = 1.05 (95% CI: 0.97-1.15). Conclusion: Based on available literature pooled for this analysis, there is no evidence to suggest any benefit from antibiotics in addition to I&D in the treatment of skin and soft tissue abscesses. (Originally submitted as a ''late-breaker.'') Primary Objectives: To compare wound healing and recurrence rates after primary vs. secondary closure of drained abscesses. We hypothesized that the percentage of drained ED abscesses completely healed at 7 days would be higher after primary closure.
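For the abscess antibiotic meta-analysis above, a minimal sketch of inverse-variance fixed-effect pooling of risk ratios is given below; the study counts are hypothetical, and a plain Wald interval on the log scale stands in for the Shore-adjusted confidence intervals the authors report.

```python
import math

def pooled_rr_fixed(studies):
    """Inverse-variance fixed-effect pooling of risk ratios.

    studies: iterable of (events_tx, n_tx, events_ctrl, n_ctrl).
    Returns the pooled RR and an approximate 95% CI.
    """
    sum_w_logrr = 0.0
    sum_w = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of the log risk ratio
        w = 1 / var
        sum_w_logrr += w * log_rr
        sum_w += w
    log_pooled = sum_w_logrr / sum_w
    se = math.sqrt(1 / sum_w)
    return (math.exp(log_pooled),
            (math.exp(log_pooled - 1.96 * se), math.exp(log_pooled + 1.96 * se)))

# Hypothetical counts for three trials (not the studies actually pooled):
print(pooled_rr_fixed([(42, 50, 40, 50), (80, 95, 78, 96), (30, 36, 29, 35)]))
```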
Methods: This randomized clinical trial was undertaken in two academic emergency departments. Immunocompetent adult patients with simple, localized cutaneous abscesses were randomly assigned to I&D followed by primary or secondary closure. Randomization was balanced by center, with an allocation sequence based on a block size of four, generated by a computer random number generator. The primary outcome was the percentage of healed wounds seven days after drainage. A sample of 50 patients had 80% power to detect an absolute difference of 40% in healing rates assuming a baseline rate of 25%. All analyses were by intention to treat. Results: Twenty-seven patients were allocated to primary and 29 to secondary closure, of whom 23 and 27, respectively, were followed to study completion. Healing rates at seven days were similar between the primary and secondary closure groups. We compared 100 consecutive patients each scanned on the 64- or 320-slice CCTA in 2010-2011. Measures and outcomes - Data were prospectively collected using standardized data collection forms required prior to performing CCTA. The main outcomes were cumulative radiation doses and volumes of intravenous contrast. Data analysis - Groups were compared with t-, Mann-Whitney U, and chi-square tests. Results: The mean age of patients imaged with the 64 and 320 scanners was 49 (SD 10) vs. 51 (13) (P = 0.27). Male:female ratios were also similar (57:43 vs. 51:49, respectively; P = 0.40). Both mean (P < 0.001) and median (P = 0.006) effective radiation dose were significantly lower with the 320 (6.8 and 6 mSv) vs. the 64-slice scanner (12.2 and 10 mSv), respectively. Prospective gating was successful in 100% of the 320 scans and only in 38% of the 64 scans (P < 0.001). Mean IV contrast volumes were also lower for the 320 vs. the 64-slice scanner (74 ± 10 vs. 96 ± 12 ml; P < 0.001). The percentage of non-diagnostic scans was similarly low with both scanners (3% each). There were no differences in use of beta-blockers or nitrates. Conclusion: When compared with the 64-slice scanner, the 320-slice scanner reduces the effective radiation doses and IV contrast volumes in ED patients with CP undergoing CCTA. Need for beta-blockers and nitrates was similar and both scanners achieved excellent diagnostic image quality. Background: A few studies have demonstrated that bedside ultrasound measurement of inferior vena cava to aorta (IVC-to-Ao) ratio is associated with the level of dehydration in pediatric patients, and a cutoff of 0.8 has been proposed, below which a patient is considered dehydrated. Objectives: We sought to externally validate the ability of IVC-to-Ao ratio to discriminate dehydration and the proposed cutoff of 0.8 in an urban pediatric emergency department (ED). Methods: This was a prospective observational study at an urban pediatric ED. We included patients aged 3 to 60 months with clinical suspicion of dehydration by the ED physician and an equal number of control patients with no clinical suspicion of dehydration. We excluded children who were hemodynamically unstable, had chronic malnutrition or failure to thrive, had open abdominal wounds, or were unable to provide patient or parental consent. A validated clinical dehydration score (CDS) (range 0 to 8) was used to measure initial dehydration status. An experienced sonographer blinded to the CDS and not involved in the patient's care measured the IVC-to-Ao ratio on the patient prior to any hydration.
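The sample size statement in the abscess-closure trial above (50 patients, 80% power, a 40% absolute difference from a 25% baseline) can be checked with a standard two-proportion power calculation; the sketch below uses statsmodels and an assumed 25 patients per arm, so the exact figure may differ from whatever method the investigators used.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed inputs: baseline healing 25%, detectable rate 65%, two-sided alpha 0.05.
effect = proportion_effectsize(0.65, 0.25)   # Cohen's h for the two proportions
power = NormalIndPower().power(effect_size=effect, nobs1=25, alpha=0.05,
                               ratio=1.0, alternative="two-sided")
print(f"power with 25 per arm: {power:.2f}")  # should be at least the 80% quoted
```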
CDS was collapsed into a binary outcome of no dehydration or any level of dehydration (1 or higher). The ability of IVC-to-Ao ratio to discriminate dehydration was assessed using area under the receiver operating characteristic curve (AUC), and the sensitivity and specificity of IVC-to-Ao ratio were calculated for three cutoffs (0.6, 0.8, 1.0). Calculation of AUC was repeated after adjusting for age and sex. Results: 92 patients were enrolled, 39 (42%) of whom had a CDS of 1 or higher. Median age was 28 (interquartile range 16-39) months, and 53 (58%) were female. The IVC-to-Ao ratio showed an unadjusted AUC of 0.66 (95% CI 0.54-0.77) and adjusted AUC of 0.67 (95% CI 0.56-0.79). For a cutoff of 0.6, sensitivity was 26% (95% CI 13%-42%) and specificity 92% (95% CI 82%-98%); for a cutoff of 0.8, sensitivity was 51% (95% CI 35%-68%) and specificity 74% (95% CI 60%-85%); for a cutoff of 1.0, sensitivity was 79% (95% CI 64%-91%) and specificity 40% (95% CI 26%-54%). Conclusion: The ability of the IVC-to-Ao ratio to discriminate dehydration in young pediatric ED patients was modest, and the cutoff of 0.8 was neither sensitive nor specific. Background: While early cardiac computed tomographic angiography (CCTA) could be more effective than current management strategies for managing emergency department (ED) patients with acute chest pain and intermediate (>4%) risk of acute coronary syndrome (ACS), it could also result in increased testing, cost, and radiation exposure. Objectives: The purpose of the study was to determine whether incorporation of CCTA early in the ED evaluation process leads to more efficient management and earlier discharge than usual care in patients with acute chest pain at intermediate risk for ACS. Methods: Randomized comparative effectiveness trial enrolling patients between 40 and 75 years of age without known CAD, presenting to the ED with chest pain but without ischemic ECG changes or elevated initial troponin, who require further risk stratification for decision making, at nine US sites. Patients are being randomized to either CCTA as the first diagnostic test or to usual care, which could include no testing or functional testing such as exercise ECG, stress SPECT, and stress echo following serial biomarkers. Test results were provided to physicians but management in neither arm was driven by a study protocol. Data on time, diagnostic testing, and cost for the index hospitalization and the following 28 days are being collected. The primary endpoint is length of hospital stay (LOS). The trial is powered to allow for detection of a difference in LOS of 10.1 hours between competing strategies with 95% power assuming that 70% of projected LOS values are true. Secondary endpoints are cumulative radiation exposure and cost of competing strategies. Tertiary endpoints are institutional, caregiver, and patient characteristics associated with primary and secondary outcomes. Rate of missed ACS within 28 days is the safety endpoint. Results: As of November 21st, 2011, 880 of 1000 patients have been enrolled (mean age: 54 ± 8, 46.5% female, ACS rate 7.55%). The anticipated completion of the last patient visit is 02/28/12 and the database will be locked in early March 2012. We will present the results of the primary, secondary, and some tertiary endpoints for the entire cohort.
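The IVC-to-Ao analysis above reports an AUC and sensitivity/specificity at three cutoffs; the sketch below shows how such numbers can be computed, using synthetic stand-in data since the actual measurements are not available. Note that lower ratios indicate dehydration, so the ratio is negated before computing the AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: 1 = any dehydration (CDS >= 1), 0 = none.
rng = np.random.default_rng(1)
dehydrated = rng.integers(0, 2, 92)
ivc_ao = rng.normal(1.1, 0.3, 92) - 0.15 * dehydrated   # lower ratio when dehydrated

auc = roc_auc_score(dehydrated, -ivc_ao)   # negate: smaller ratio = more likely dehydrated

def sens_spec(cutoff):
    test_positive = ivc_ao < cutoff        # "positive" means ratio below the cutoff
    sens = test_positive[dehydrated == 1].mean()
    spec = (~test_positive)[dehydrated == 0].mean()
    return sens, spec

for cutoff in (0.6, 0.8, 1.0):
    s, sp = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {s:.0%}, specificity {sp:.0%}")
print(f"AUC {auc:.2f}")
```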
Conclusion: ROMICAT II will provide rigorous data on whether incorporation of CCTA early in the ED evaluation process leads to more efficient management and triage than usual care in patients with acute chest pain at intermediate risk for ACS. (Originally submitted as a ''late-breaker.'') Background: Many studies have documented higher rates of advanced radiography utilization across U.S. emergency departments (EDs) in recent years, with an associated decrease in diagnostic yield (positive tests/total tests). Provider-to-provider variability in diagnostic yield has not been well studied, nor have the factors that may explain these differences in clinical practice. Objectives: We assessed the physician-level predictors of diagnostic yield using advanced radiography to diagnose pulmonary embolus (PE) in the ED, including demographics and D-dimer ordering rates. Methods: We conducted a retrospective chart review of all ED patients who had a CT chest or V/Q scan ordered to rule out PE from 1/06 to 12/09 in four hospitals in the Medstar health system. Attending physicians were included in the study if they had ordered 50 or more scans over the study period. The result of each CT and VQ scan was recorded as positive, negative, or indeterminate, and the identity of the ordering physician was also recorded. Data on provider sex, residency type (EM or other), and year of residency completion were collected. Each provider's positive diagnostic yield was calculated, and logistic regression analysis was done to assess correlation between positive scans and provider characteristics. Results: During the study period, 15,015 scans (13,571 CTs and 1,443 V/Qs) were ordered by 93 providers. The physicians were an average of 9.7 years from residency, 36% were female, and 98% were EM-trained. Diagnostic yield varied significantly among physicians (p < 0.001), and ranged from 0% to 18%. The median diagnostic yield was 5.9% (IQR 3.8%-7.8%). The use of D-dimer by provider also varied significantly, from 4% to 48% (p < 0.001). The odds of a positive test were significantly lower among providers less than 10 years out from residency graduation (OR 0.80, CI 0.68-0.95) after controlling for provider sex, type of residency training, D-dimer use, and total number of scans ordered. Conclusion: We found significant provider variability in diagnostic yield for PE and use of D-dimer in this study population, with 25% of providers having diagnostic yield less than or equal to 3.8%. Providers who graduated from residency more recently appear to have a lower diagnostic yield, suggesting a more conservative approach in this group. Background: The literature reports that anticoagulation increases the risk of mortality in patients presenting to emergency departments (ED) with head trauma (HT). It has been suggested that such patients should be treated in a protocolized fashion, including CT within 15 minutes, and anticipatory preparation of FFP before CT results are available. There are significant logistical and financial implications associated with implementation of such a protocol. Objectives: Our primary objective was to determine the effect of anticoagulant therapy on the risk of intracranial hemorrhage (ICH) in elderly patients presenting to our urban community hospital following blunt head injury. Methods: This was a retrospective chart review study of HT patients >60 years of age presenting to our ED over a 6-month period.
Charts reviewed were identified using our electronic medical record via chief complaints and ICD-9 codes and cross-referencing with written CT logs. At least 25% of each research assistant's contributed data were reviewed to validate reliability. We collected information regarding use of warfarin, clopidogrel, and aspirin and CT findings of ICH. Using univariate logistic regression, we calculated odds ratios (OR) for ICH with 95% CI. Results: We identified 363 elderly HT patients. The mean age of our population was 72; 34 (8.3%) admitted to using anticoagulant therapy, and 23% were on antiplatelet drugs. 14 (3.8%) of the cohort had ICH, 3 patients required neurosurgical intervention, and 1 had transfusion of blood products. Of the non-anticoagulated patients, 12 (3.6%) were found to have ICH, half of those (6). Candidate microRNAs (miR-146a, miR-150, and miR-223) were measured using real-time quantitative PCR from serum drawn at enrollment. IL-6, IL-10, and TNF-a were measured using a Bio-Plex suspension system. Baseline characteristics, IL-6, IL-10, TNF-a, and microRNAs were compared using one-way ANOVA or Fisher exact test, as appropriate. Correlations between miRNAs and SOFA scores, IL-6, IL-10, and TNF-a were determined using Spearman's rank correlation. A logistic regression model was constructed using in-hospital mortality as the dependent variable and miRNAs as the independent variables of interest. Bonferroni adjustments were made for multiple comparisons. Results: Of 93 patients, 24 were controls, 29 had sepsis, and 40 had septic shock. We found no difference in serum miR-146a or miR-223 between cohorts, and found no association between these microRNAs and either inflammatory markers or SOFA score. miR-150 demonstrated a significant correlation with SOFA score (ρ = 0.31, p = 0.01) and IL-10 (ρ = 0.37, p = 0.001), but not IL-6 or TNF-a (p = 0.046, p = 0.59). Logistic regression demonstrated miR-150 to be associated with mortality, even after adjusting for SOFA score (p = 0.003). Conclusion: Neither miR-146a nor miR-223 demonstrated any diagnostic or prognostic ability in this cohort. miR-150 was associated with inflammation, increasing severity of illness, and mortality, and may represent a novel marker for diagnosis and prognosis of sepsis. Objectives: To examine the association between emergency physician recognition of SIRS and sepsis and subsequent treatment of septic patients. Methods: A retrospective cohort study of all-age patient medical records with positive blood cultures drawn in the emergency department from 11/2008 to 1/2009 at a Level I trauma center. Patient parameters were reviewed including vital signs, mental status, imaging, and laboratory data. Criteria for SIRS, sepsis, severe sepsis, and septic shock were applied according to established guidelines for pediatrics and adults. These data were compared to physician differential diagnosis documentation. The Mann-Whitney test was used to compare time to antibiotic administration and total volume of fluid resuscitation between two groups of patients: those with recognized sepsis and those with unrecognized sepsis. Results: SIRS criteria were present in 233/338 reviewed cases. Sepsis criteria were identified in 215/338 cases and considered in the differential diagnosis in 121/215 septic patients. Severe sepsis was present in 89/338 cases and septic shock was present in 42/338 cases. The sepsis 6-hour resuscitation bundle was completed in the emergency department in 16 cases of severe sepsis or septic shock.
The 121 patients who met sepsis criteria and were recognized by the ED physician had a median time to antibiotics of 150 minutes (IQR: 89-282) and a median IVF volume of 1500 ml (IQR: 500-3000). The 94 patients who met sepsis criteria but went unrecognized in the documentation had a median time to antibiotics of 225 minutes (IQR: 135-355) and a median volume of fluid resuscitation of 1000 ml (IQR: ). Median time to antibiotics and median volume of fluid resuscitation differed significantly between recognized and unrecognized septic patients (p = 0.003 and p = 0.002, respectively). Conclusion: Emergency physicians correctly identify and treat infection in most cases, but frequently do not document SIRS and sepsis. Lack of documentation of sepsis in the differential diagnosis is associated with increased time to antibiotic delivery and a smaller total volume of fluid administration, which may explain poor sepsis bundle compliance in the emergency department.

Background: Severe sepsis is a common clinical syndrome with substantial human and financial impact. In 1992 the first consensus definition of sepsis was published. Subsequent epidemiologic estimates were collected using administrative data, but ongoing discrepancies in the definition of severe sepsis led to large differences in estimates. Objectives: We seek to describe the variations in incidence and mortality of severe sepsis in the US using four methods of database abstraction. Methods: Using a nationally representative sample, four previously published methods (Angus, Martin, Dombrovskiy, Wang) were used to gather cases of severe sepsis over a 6-year period (2004-2009). In addition, the use of new ICD-9 sepsis codes was compared to previous methods. Our main outcome measure was annual national incidence and in-hospital mortality of severe sepsis. Results: The average annual incidence varied by as much as 3.5-fold depending on the method used and ranged from 894,013 (300/100,000 population) to 3,110,630 (1,031/100,000) using the methods of Dombrovskiy and Wang, respectively. The average annual increase in the incidence of severe sepsis was similar (13.0-13.3%) across all methods. Total mortality mirrored the increase in incidence over the 6-year period.

Background: Radiation exposure from medical imaging has been the subject of many major journal articles, as well as a topic in the mainstream media. Some estimate that one-third of all CT scans are not medically justified. It is important for practitioners ordering these scans to be knowledgeable of currently discussed risks. Objectives: To compare the knowledge, opinions, and practice patterns of three groups of providers regarding CTs in the ED. Methods: An anonymous electronic survey was sent to all residents, physician assistants, and attending physicians in emergency medicine (EM), surgery, and internal medicine (IM) at a single academic tertiary care referral Level I trauma center with an annual ED volume of over 160,000 visits. The survey was pilot-tested and validated. All data were analyzed using Pearson's chi-square test. Results: There was a response rate of 32% (220/668). Data from surgery respondents were excluded due to a low response rate.
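As an illustrative aside, a Pearson chi-square comparison of the kind used in the survey above could be sketched as follows; the counts are hypothetical, not the study's data (Python with scipy assumed):

    # Hypothetical sketch: chi-square test comparing the proportion of EM vs. IM
    # respondents answering a radiation-knowledge item correctly.
    from scipy import stats

    #          correct  incorrect
    table = [[60, 40],   # EM respondents (hypothetical counts)
             [35, 65]]   # IM respondents (hypothetical counts)

    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")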
In comparison to IM, EM respondents more often correctly equated one abdominal CT to between 100 and 500 chest x-rays, reported receiving formal training regarding the risks of radiation from CTs, believed that excessive medical imaging is associated with an increased lifetime risk of cancer, and routinely discussed the risks of CT imaging with stable patients (see Table 1). Particular patient factors influenced whether radiation risks were discussed with patients for 60% of respondents in each specialty (see Table 2). IM providers reported routinely reviewing the patient's medical imaging history before ordering an abdominal CT in a stable patient less often than the EM providers surveyed. Overall, 67% of respondents felt that ordering an abdominal CT in a stable ED patient is a clinical decision that should be discussed with the patient, but should not require consent. Conclusion: Compared with IM, EM practitioners report greater awareness of the risks of radiation from CTs and discuss risks with patients more often. They also review patients' imaging history more often and take this, as well as patients' age, into account when ordering CTs. These results indicate a need for improved education for both EM and IM providers regarding the risks of radiation from CT imaging.

Background: In Nebraska, 80% of emergency departments have fewer than 10,000 annual visits, and the majority are in rural settings. General practitioners working in rural emergency departments have reported low confidence in several emergency medicine skills. Current staffing patterns include using midlevels as the primary provider with non-emergency-medicine-trained physicians as back-up. Lightly-embalmed cadaver labs are used for residents' procedural training. Objectives: To describe the effect of a lightly-embalmed cadaver workshop on physician assistants' (PA) reported level of confidence in selected emergency medicine procedures. Methods: An emergency medicine procedure lab was offered at the Nebraska Association of Physician Assistants annual conference. Each lab consisted of a 2-hour hands-on session teaching endotracheal intubation techniques, tube thoracostomy, intraosseous access, and arthrocentesis of the knee, shoulder, ankle, and wrist to PAs. IRB-approved surveys were distributed pre-lab, and a post-lab survey was distributed after lab completion. Baseline demographic experience was collected. Pre- and post-lab procedural confidence was rated on a six-point Likert scale (1-6), with 1 representing no confidence. The Wilcoxon signed-rank test was used to calculate p values. Results: 26 PAs participated in the course. All completed a pre- and post-lab assessment. No PA had done any one procedure more than 5 times in their career. Pre-lab modes of confidence level were ≤3 for each procedure. Post-lab modes were >4 for each procedure except arthrocentesis of the ankle and wrist. Post-lab assessments of procedural confidence significantly improved for all procedures, with p values <0.05. Conclusion: Midlevel providers' level of confidence improved for emergent procedures after completion of a procedure lab using lightly-embalmed cadavers. A mobile cadaver lab would be beneficial to train rural providers with minimal experience.

Background: Use of automated external defibrillators (AED) improves survival in out-of-hospital cardiopulmonary arrest (OHCA). Since 2005, the American Heart Association has recommended that individuals one year of age or older who sustain OHCA have an AED applied.
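As an illustrative aside, the Wilcoxon signed-rank comparison of paired pre- and post-lab confidence ratings described above could be sketched as follows; the Likert values are hypothetical, not study data (Python with scipy assumed):

    # Hypothetical sketch: Wilcoxon signed-rank test on paired pre-/post-lab
    # confidence ratings (1 = no confidence, 6 = complete confidence).
    from scipy import stats

    pre  = [2, 3, 2, 1, 3, 2, 3, 2]   # hypothetical pre-lab ratings
    post = [4, 5, 4, 4, 5, 4, 5, 3]   # hypothetical post-lab ratings

    w_stat, p_value = stats.wilcoxon(pre, post)
    print(f"W = {w_stat:.1f}, p = {p_value:.3f}")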
Little is known about how often this occurs and what factors are associated with AED use in the pediatric population. Objectives: Our objective was to describe AED use in the pediatric population and to assess predictors of AED use when compared to adult patients. Methods: We conducted a secondary analysis of prospectively collected data from 29 U.S. cities that participate in the Cardiac Arrest Registry to Enhance Survival (CARES). Patients were included if they had a documented resuscitation attempt from October 1, 2005 through December 31, 2009 and were ≥1 year old. Patients were considered pediatric if they were less than 19 years old. AED use included application by laypersons and first responders. Hierarchical multivariable logistic regression analysis was used to estimate the associations between age and AED use. Results: There were 19,559 OHCAs included in this analysis, of which 239 (1.2%) occurred in pediatric patients. An AED was used in 5,517 cases overall, and there were 1,751 (8.9%) total survivors. AEDs were applied less often in pediatric patients (19.7%, 95% CI: 14.6%-24.7% vs 28.3%, 95% CI: 27.7%-29.0%). Within the pediatric population, only 35.4% of patients with a shockable rhythm had an AED used. In all pediatric patients, regardless of presenting rhythm, AED use was associated with a statistically significant increase in return of spontaneous circulation (AED used 29.8%, 95% CI: 16.2-43.4 vs AED not used 16.8%, 95% CI: 11.4-22.1, p < 0.05), although there was no significant increase in survival to hospital discharge (AED used 12.8%; AED not used 5.2%; p = 0.057). In the adjusted model, pediatric age was independently associated with failure to use an AED (OR 0.61, 95% CI: 0.42-0.87), as was female sex (OR 0.88, 95% CI: 0.81-0.95). A public arrest (OR 1.35, 95% CI: 1.24-1.46) and a bystander-witnessed arrest (OR 1.20, 95% CI: 1.11-1.29) were also predictive of AED use. Conclusion: Pediatric patients who experience OHCA are less likely to have an AED used. Continued education of first responders and the lay public to increase AED use in this population is necessary.

Does Implementation of a Therapeutic Hypothermia Protocol Improve Survival and Neurologic Outcomes in all Comatose Survivors of Sudden Cardiac Arrest? Ken Will, Michael Nelson, Abishek Vedavalli, Renaud Gueret, John Bailitz; Cook County (Stroger), Chicago, IL
Background: The American Heart Association (AHA) currently recommends therapeutic hypothermia (TH) for out-of-hospital comatose survivors of sudden cardiac arrest (CSSCA) with an initial rhythm of ventricular fibrillation (VF). Based on currently limited data, the AHA further recommends that physicians consider TH for CSSCA, from both out-of-hospital and inpatient settings, with an initial non-VF rhythm. Objectives: To investigate whether a TH protocol improves both survival and neurologic outcomes for CSSCA, for out-of-hospital and inpatient arrests, with any initial rhythm, in comparison to outcomes previously reported in the literature prior to TH. Methods: We conducted a prospective observational study of CSSCA between August 2009 and May 2011 whose care included TH. The study enrolled eligible consecutive CSSCA survivors, from both out-of-hospital and inpatient settings, with any initial arrest rhythm. Primary endpoints included survival to hospital discharge and neurologic outcomes, stratified by SCA location and by initial arrest rhythm.
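As an illustrative aside, an adjusted logistic regression of the kind reported in the AED study above could be sketched as follows; the data are simulated, the column names are invented for the example, and an ordinary (non-hierarchical) model is used for simplicity (Python with pandas and statsmodels assumed):

    # Hypothetical sketch: logistic regression of AED application on pediatric age,
    # sex, public location, and witnessed status, using simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "pediatric": rng.integers(0, 2, n),
        "female":    rng.integers(0, 2, n),
        "public":    rng.integers(0, 2, n),
        "witnessed": rng.integers(0, 2, n),
    })
    # Simulated outcome: AEDs applied less often for pediatric patients.
    linpred = -1.0 - 0.5 * df["pediatric"] + 0.3 * df["public"] + 0.2 * df["witnessed"]
    df["aed_used"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

    fit = smf.logit("aed_used ~ pediatric + female + public + witnessed", data=df).fit()
    print(np.exp(fit.params))      # adjusted odds ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals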
Results: Overall, of 27 eligible patients, 11 (41%, 95% CI 22-66%) survived to discharge, 7 (26%, 95% CI 9-43%) with at least a good neurologic outcome. Twelve arrests were out-of-hospital and 15 were inpatient. Among the 12 out-of-hospital patients, 6 (50%, 95% CI 22-78%) survived to discharge, 5 (41%, 95% CI 13-69%) with at least a good neurologic outcome. Among the 15 inpatients, 5 (33%, 95% CI 9-57%) survived to discharge, 2 (13%, 95% CI 0-30%) with at least a good neurologic outcome. By initial rhythm, 6 patients had an initial rhythm of VF/T and 21 non-VF/T. Among the 6 patients with an initial rhythm of VF/T, 4 (67%, CI 39-100%) survived to discharge, all 4 with at least a good outcome, including 3 out-of-hospital and 1 inpatient arrest. Among the 21 patients with an initial rhythm of non-VF/T, 7 (33%, CI 22-53%) survived to discharge, 3 (14%, CI 0-28%) with at least a good neurologic outcome, including 2 out-of-hospital and 1 inpatient arrest. Conclusion: Our preliminary data suggest that local implementation of a TH protocol improves survival and neurologic outcomes for CSSCA, for out-of-hospital and inpatient arrests, with any initial rhythm, in comparison to outcomes previously reported in the literature prior to TH. Subsequent research will include comparison to local historical controls, additional data from other regional TH centers, as well as comparison of different cooling methods.

Background: Therapeutic hypothermia (TH) has been shown to improve the neurologic recovery of cardiac arrest patients who experience return of spontaneous circulation (ROSC). It remains unclear how earlier cooling and treatment optimization influence outcomes. Objectives: To evaluate the effects of protocolized use of early sedation and paralysis on cooling optimization and clinical outcomes in survivors of cardiac arrest. Methods: A 3-year (2008-2010), pre-post intervention study of patients with ROSC after cardiac arrest treated with TH was performed. Patients treated with a standardized order set that lacked a uniform sedation and paralytic order were included in the pre-intervention group, and those with a standardized order set that included a uniform sedation and paralytic order were included in the post-intervention group. Patient demographics, initial and discharge Glasgow Coma Scale (GCS) scores, resuscitation details, cooling time variables, severity of illness as measured by the APACHE II score, discharge disposition, functional status, and days to death were collected and analyzed using Student's t-tests, Mann-Whitney U tests, and the log-rank test. Results: 232 patients treated with TH after ROSC were included, with 107 patients in the pre-intervention group and 125 in the post-intervention group. The average time to goal temperature (33°C) was 227 minutes pre-intervention and 168 minutes post-intervention (p = 0.001). A 2-hour time target was achieved in 38.6% of post-intervention patients compared to 24.5% in the pre-intervention group (p = 0.029). Twenty-eight-day mortality was similar between groups (65.4% and 65.3%), though hospital length of stay (10 days pre- and 8 days post-intervention) and discharge GCS (13 pre- and 14 post-intervention) differed between cohorts. More post-intervention patients were discharged to home (55.8%) compared to 43.2% in the pre-intervention group. Conclusion: Protocolized use of sedation and paralysis improved time to goal temperature achievement. These improved TH time targets were associated with improved neuroprotection, GCS recovery, and disposition outcome.
Standardized sedation and paralysis appears to be a useful adjunct in induced TH.

Background: CT is increasingly used to assess children with signs and symptoms of acute appendicitis (AA), though concerns regarding the long-term risk of exposure to ionizing radiation have generated interest in methods to identify children at low risk. Objectives: We sought to derive a clinical decision rule (CDR) from a minimum set of commonly used signs and symptoms from prior studies to predict which children with acute abdominal pain have a low likelihood of AA, and compared it to physician clinical impression (PCI). Methods: We prospectively analyzed 420 subjects aged 2 to 20 years in 11 U.S. emergency departments with abdominal pain plus signs and symptoms suspicious for AA within the prior 72 hours. Subjects were assessed by study staff unaware of their diagnosis for 17 clinical attributes drawn from published appendicitis scoring systems, and the physicians responsible for the physical examination estimated the probability of AA based on PCI prior to medical disposition. Based on medical record entry rate, frequently used CDR attributes were evaluated using recursive partitioning and logistic regression to select the best minimum set capable of discriminating subjects with and without AA. Subjects were followed to determine whether imaging was used, and imaging use was tabulated by both PCI and the CDR to assess their ability to identify patients who did or did not benefit based on diagnosis. Results: This cohort had a 27.3% prevalence (118/431 subjects) of AA. We derived a CDR based on the absence of two out of three of the following attributes: abdominal tenderness, pain migration, and rigidity/guarding. This rule had a sensitivity of 89.8% (95% CI: 83.1-94.1), specificity of 47.6% (95% CI: 42.1-53.1), NPV of 92.5% (95% CI: 87.4-95.7), and negative likelihood ratio of 0.21 (95% CI: 0.12-0.37). The PCI set at AA <30% pre-test probability had a sensitivity of 94.1% (95% CI: 88.3-97.1), specificity of 49.4% (95% CI: 43.9-54.9), NPV of 95.7% (95% CI: 91.3-97.9), and negative likelihood ratio of 0.12 (95% CI: 0.06-0.25). The methods each classified 37% of the patients as low risk for AA. Our CDR identified 29.1% (43/148) of low-risk subjects who received CT but, being AA-negative, could have been spared CT; the PCI identified 20.1% (30/149). Conclusion: Compared to physician clinical impression, our clinical decision rule can identify more children at low risk for appendicitis who could be managed more conservatively with careful observation and avoidance of CT.

Background: Abdominal pain is the most common complaint in the ED, and appendicitis is the most common indication for emergency surgery. A clinical decision rule (CDR) identifying abdominal pain patients at a low risk for appendicitis could lead to a significant reduction in CT scans and could have a significant public health impact. The Alvarado score is one of the most widely applied CDRs for suspected appendicitis, and a low modified Alvarado score (less than 4) is sometimes used to rule out acute appendicitis. The modified Alvarado score has not been prospectively validated in ED patients with suspected appendicitis. Objectives: We sought to prospectively evaluate the negative predictive value of a low modified Alvarado score (MAS) in ED patients with suspected appendicitis. We hypothesized that a low MAS (less than 4) would have a sufficiently high NPV (>95%) to rule out acute appendicitis.
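As an illustrative aside, the test characteristics reported for decision rules like those above can be computed directly from 2x2 counts; the sketch below uses counts chosen only to approximate the CDR percentages reported above, not the study's raw data (Python assumed):

    # Hypothetical sketch: sensitivity, specificity, PPV, NPV, and negative
    # likelihood ratio from 2x2 counts (tp/fp/fn/tn are approximations, not raw data).
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lr_negative = (1 - sensitivity) / specificity
        return {"sensitivity": sensitivity, "specificity": specificity,
                "ppv": ppv, "npv": npv, "LR-": lr_negative}

    # Roughly 106 true positives, 12 false negatives, 149 true negatives, and
    # 164 false positives reproduce the reported 89.8% / 47.6% / 92.5% / 0.21.
    print(diagnostic_metrics(tp=106, fp=164, fn=12, tn=149))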
Methods: We enrolled patients greater than or equal to 18 years old who were suspected of having appendicitis (listed as one of the top three diagnoses by the treating physician before ancillary testing) as part of a prospective cohort study in two urban academic EDs from August 2009 to April 2010. Elements of the MAS and the final diagnosis were recorded on a standard data form for each subject. The sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) were calculated with 95% CI for a low MAS and final diagnosis of appendicitis.

Background: Evaluating children for appendicitis is difficult, and strategies have been sought to improve the precision of the diagnosis. Computed tomography is now widely used but remains controversial due to the large dose of ionizing radiation and risk of subsequent radiation-induced malignancy. Objectives: We sought to identify a biomarker panel for use in ruling out pediatric acute appendicitis as a means of reducing exposure to ionizing radiation. Methods: We prospectively enrolled 431 subjects aged 2 to 20 years presenting in 11 U.S. emergency departments with abdominal pain and other signs and symptoms suspicious for acute appendicitis within the prior 72 hours. Subjects were assessed by study staff unaware of their diagnosis for 17 clinical attributes drawn from appendicitis scoring systems, and blood samples were analyzed for CBC differential and 5 candidate proteins. Based on discharge diagnosis or post-surgical pathology, the cohort exhibited a 27.3% prevalence (118/431 subjects) of appendicitis. Clinical attributes and biomarker values were evaluated using principal component analysis, recursive partitioning, and logistic regression to select the combination that best discriminated between those subjects with and without disease. Mathematical combination of three inflammation-related markers in a panel composed of myeloid-related protein 8/14 complex (MRP), C-reactive protein (CRP), and white blood cell count (WBC) provided optimal discrimination. Results: This panel exhibited a sensitivity of 98% (95% CI, 94-100%), a specificity of 48% (95% CI, 42-53%), and a negative predictive value of 99% (95% CI, 95-100%) in this cohort. The observed performance was then verified by testing the panel against a pediatric subset drawn from an independent cohort of all ages enrolled in an earlier study. In this cohort, the panel exhibited a sensitivity of 95% (95% CI, 87-98%), a specificity of 41% (95% CI, 34-50%), and a negative predictive value of 95% (95% CI, 87-98%). Conclusion: AppyScore is highly predictive of the absence of acute appendicitis in these two cohorts. If these results are confirmed by a prospective evaluation currently underway, the AppyScore panel may be useful to classify pediatric patients presenting to the emergency department with signs and symptoms suggestive of, or consistent with, acute appendicitis, thereby sparing many patients ionizing radiation.

Background: There are no current studies on the tracking of emergency department (ED) patient dispersal when a major ED closes. This study demonstrates a novel way to track where patients sought emergency care following the closure of Saint Vincent's Catholic Medical Center (SVCMC) in Manhattan by using de-identified data from a health information exchange, the New York Clinical Information Exchange (NYCLIX). NYCLIX matches patients who have visited multiple sites using their demographic information.
On April 30, 2010, SVCMC officially stopped providing emergency and outpatient services. We report the patterns in which patients from SVCMC visited other sites within NYCLIX. Objectives: We hypothesize that patients often seek emergency care based on geography when a hospital closes. Methods: A retrospective pre- and post-closure analysis was performed of SVCMC patients visiting other hospital sites. The pre-closure study dates were January 1, 2010-March 31, 2010. The post-closure study dates were May 1, 2010-July 31, 2010. An SVCMC patient was defined as a patient with any SVCMC encounter prior to its closure. Using de-identified aggregate count data, we calculated the average number of visits per week by SVCMC patients at each site (Hospitals A-H). We ran a paired t-test to compare the pre- and post-closure averages by site. The following specifications were used to write the database queries: for patients who had one or more prior visits to SVCMC, return the following for each day within the study period:
a. EID: a unique and meaningless proprietary ID generated within the NYCLIX Master Patient Index (MPI);
b. Age: through the age of 89 (persons over 90 were listed as "90+");
c. Ethnicity/race;
d. Type of visit: emergency;
e. Location of visit: specific NYCLIX site.
Results: Nearby hospitals within 2 miles saw the highest increase in ED visits after SVCMC closed. This increase was seen out to about 5 miles. Hospitals >5 miles away did not see any significant changes in ED visits. See table. Conclusion: When a hospital and its ED close down, patients seem to seek emergency care at the nearest hospital based on geography. Other factors may include the patient's primary doctor, availability of outpatient specialty clinics, insurance contracts, or ambulance transport preferences. This study is limited by the inclusion of data from only the eight hospitals participating in NYCLIX at the time of the SVCMC closure.

Methods: Data were collected on all ED EMS arrivals from the metro Calgary (population 1.1 million) area to its three urban adult hospitals. The study phases consisted of the 7 months from February to October 2010 (pre-OCP) compared against the same months in 2011 (post-OCP). Data from the EMS operational database and the Regional Emergency Department Information System (REDIS) database were linked. The primary analysis examined the change in EMS offload delay, defined as the time from EMS triage arrival until patient transfer to an ED bed. A secondary analysis evaluated variability in EMS offload delay between receiving EDs. Conclusion: Implementation of a regional overcapacity protocol to reduce ED crowding was associated with an important reduction in EMS offload delay, suggesting that policies that target hospital processes have bearing on EMS operations. Variability in offload delay improvements is likely due to site-specific issues, and the gains in efficiency correlate inversely with acuity.

Methods: A pre-post intervention study was conducted in the ED of an adult university teaching hospital in Montreal (annual visits = 69,000). The RAZ unit (intervention), created to offload the ACU of the main ED, started operating in January 2011. Using a split-flow management strategy, patients were directed to the RAZ unit based on patient acuity level (CTAS code 3 and certain code 2), likelihood of being discharged within 12 hours, and not requiring an ED bed for continued care.
Data were collected weekdays from 9:00 to 21:00 for 4 months (September-December 2008; pre-RAZ) and for 1.5 months (February-March 2011; post-RAZ). In the ACU of the main ED, research assistants observed and recorded cubicle access time and nurse and physician assessment times. Databases were used to extract socio-demographics, ambulance arrival, triage code, chief complaint, triage and registration time, length of stay, and ED occupancy.

Background: Telephone follow-up after discharge from the ED is useful for treatment and quality assurance purposes. ED follow-up studies frequently do not achieve high (i.e. ≥80%) completion rates. Objectives: To determine the influence of different factors on the telephone follow-up rate of ED patients. We hypothesized that with a rigorous follow-up system we could achieve a high follow-up rate in a socioeconomically diverse study population. Methods: Research assistants (RAs) prospectively enrolled adult ED patients discharged with a medication prescription between November 15, 2010 and September 9, 2011 from one of three EDs affiliated with one health care system: (A) academic Level I trauma center, (B) community teaching affiliate, and (C) community hospital. Patients unable to provide informed consent, non-English speaking, or previously enrolled were excluded. RAs interviewed subjects prior to ED discharge and conducted a telephone follow-up interview 1 week later. Follow-up procedures were standardized (e.g. number of calls per day, times to place calls, obtaining alternative numbers), and each subject's follow-up status was monitored and updated daily through a shared, web-based data system. Subjects who completed follow-up were mailed a $10 gift card. We examined the influence of patient (age, sex, race, insurance, income, marital status, usual major activity, education, literacy level, health status), clinical (acuity, discharge diagnosis, ED length of stay, site), and procedural factors (number and type of phone numbers received from subjects, offering two gift cards for difficult-to-reach subjects) on the odds of successful follow-up using multivariate logistic regression. Results: Of the 3,940 enrolled, 45% were white, 59% were covered by Medicaid or uninsured, and 44% reported an annual household income of <$26,000. 86% completed telephone follow-up, with 41% completing on the first attempt. The table displays the factors associated with successful follow-up. In addition to patient demographics and lower acuity, obtaining a cell phone or multiple phone numbers, as well as offering two gift cards to a small number of subjects, increased the odds of successful follow-up. Conclusion: With a rigorous follow-up system and a small monetary incentive, a high telephone follow-up rate is achievable one week after an ED visit.

Methods: An interrupted time-series design was used to evaluate the study question. Data regarding adherence with the following pneumonia core measures were collected pre- and post-implementation of the enhanced decision-support tool: blood cultures prior to antibiotics, antibiotics within 6 hours of arrival, appropriate antibiotic selection, and mean time to antibiotic administration. Prescribing clinicians were educated on the use of the decision-support tool at departmental meetings and via direct feedback on their cases. Results: During the 33-month study period, complete data were collected for 1185 patients diagnosed with CAP: 613 in the pre-implementation phase and 572 post-implementation.
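As an illustrative aside, an interrupted time-series analysis of the kind described above is often implemented as a segmented regression with terms for baseline trend, level change, and slope change; the monthly adherence values below are simulated and the implementation month is invented for the example (Python with pandas and statsmodels assumed):

    # Hypothetical sketch: segmented regression for an interrupted time series of
    # monthly core-measure adherence, with a level and slope change at implementation.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    month = np.arange(33)                        # 33-month study period
    post = (month >= 18).astype(int)             # implementation at month 18 (assumed)
    months_since = np.where(post == 1, month - 18, 0)

    adherence = (0.60 + 0.002 * month + 0.15 * post
                 + 0.004 * months_since + rng.normal(0, 0.03, month.size))

    df = pd.DataFrame({"adherence": adherence, "month": month,
                       "post": post, "months_since": months_since})
    fit = smf.ols("adherence ~ month + post + months_since", data=df).fit()
    print(fit.params)  # intercept, baseline trend, level change, slope change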
The mean time to antibiotic administration decreased by approximately one minute from the pre- to post-implementation phase, a change that was not statistically significant (p = 0.824). The proportion of patients receiving blood cultures prior to antibiotics improved significantly (p < 0.001), as did the proportion of patients receiving antibiotics within 6 hours of ED arrival (p = 0.004). A significant improvement in appropriate antibiotic selection was noted, with 100% of patients experiencing appropriate selection in the post-implementation phase (p = 0.0112). Use of the available support tool increased throughout the study period (χ² = 78.13, df = 1, p < 0.0001). All improvements were maintained 15 months following the study intervention. Conclusion: In this academic ED, introduction of an enhanced electronic clinical decision support tool significantly improved adherence to CMS pneumonia core measures. The proportions of patients receiving blood cultures prior to antibiotics, antibiotics within 6 hours, and appropriate antibiotics all improved significantly after the introduction of the tool.

Background: Emergency medicine (EM) residency graduates need to pass both the written qualifying exam and the oral certification exam as the final benchmark to achieve board certification. The purpose of this project is to obtain information about the exam preparation habits of recent EM graduates to allow current residents to make informed decisions about their individual preparation for the ABEM written qualifying and oral certification exams. Objectives: The study sought to determine the amount of residency and individual preparation, to determine the extent of the use of various board review products, and to elicit evaluations of the various board review products used for the ABEM qualifying and certification exams. Methods: Design: An online survey instrument was used to ask respondents questions about residency preparation and individual preparation habits, as well as the types of board review products used in preparing for the EM boards. Participants: As greater than 95% of all EM graduates are EMRA members, an online survey was sent to all EMRA members who had graduated within the past three years. Observations: Descriptive statistics of types of preparation, types of resources, time, and quantitative and qualitative ratings for the various board preparation products were obtained from respondents. Results: A total of 520 respondents spent an average of 9.1 weeks and 15 hours per week preparing for the written qualifying exam, and an average of 5 weeks and 7.8 hours per week preparing for the oral certification exam. In preparing for the written qualifying exam, 90% used a preparation textbook, with 16% using more than one textbook and 47% using a board preparation course. In preparing for the oral certification exam, 56% used a preparation textbook, while 34% used a preparation course. Sixty-seven percent of respondents reported that their residency programs had a formalized written qualifying exam preparation curriculum, of which 48% were centered on the annual in-training exam. Eighty-five percent of residency programs had formalized oral certification exam preparation. Respondents reported spending on average $715 preparing for the qualifying exam and $509 for the certification exam.
Conclusion: EM residents spend significant amounts of time and money and make use of a wide range of residency and commercially available resources in preparing for the ABEM qualifying and certification exams.

Background: Communication and professionalism skills are essential for EM residents but are not well measured by selection processes. The Multiple Mini-Interview (MMI) uses multiple, short structured contacts to measure these skills. It predicts medical school success better than the interview and application. Its acceptability and utility in EM residency selection is unknown. Objectives: We theorized that the MMI would provide novel information and be acceptable to participants. Methods: 71 interns from three programs in the first month of training completed an eight-station MMI developed to focus on EM topics. Pre- and post-surveys assessed reactions using five-point scales. MMI scores were compared to application data. Results: EM grades correlated with MMI performance (F(1,66) = 4.18, p < 0.05), with honors students having higher MMI summary scores. Higher third-year clerkship grades trended toward higher MMI performance means, although not significantly. MMI performance did not correlate with a match desirability rating and did not predict other individual components of the application, including USMLE Step 1 or USMLE Step 2. Participants preferred a traditional interview (mean difference = 1.36, p < 0.0001). A mixed format was preferred over a pure MMI (mean difference = 1.1, p < 0.0001). Preference for a mixed format was similar to that for a traditional interview. MMI performance did not significantly correlate with preference for the MMI; however, there was a trend for higher performance to be associated with higher preference (r = 0.15, t(65) = 1.19, n.s.). Performance was not associated with preference for a mix of interview methods (r = 0.08, t(65) = 0.63, n.s.). Conclusion: While the MMI alone was viewed less favorably than a traditional interview, participants were receptive to a mixed-methods interview. The MMI appears to measure skills important in successful completion of an EM clerkship and thus likely EM residency. Future work will determine whether MMI performance correlates with clinical performance during residency.

Background: The annual American Board of Emergency Medicine (ABEM) in-training exam is a tool to assess resident progress and knowledge. When the New York-Presbyterian (NYP) EM Residency Program started in 2003, the exam was not emphasized and resident performance was lower than expected. A course was implemented to improve residency-wide scores despite previous EM literature failing to exhibit improvements with residency-sponsored in-training exam interventions. Objectives: To evaluate the effect of a comprehensive, multi-faceted course on residency-wide in-training exam performance. Methods: The NYP EM Residency Program, associated with Cornell and Columbia medical schools, has a 4-year format with 10-12 residents per year. An intensive 14-week in-training exam preparation program was instituted outside of the required weekly residency conferences. The program included lectures, pre-tests, high-yield study sheets, and remediation programs. Lectures were interactive, utilizing an audience response system, and consisted of 13 core lectures (2-2.5 hours) and three review sessions. Residents with previous in-training exam difficulty were counseled on designing their own study programs.
The effect on in-training exam scores was measured by comparing each resident's score to the national mean for their postgraduate year (PGY). Scores before and after course implementation were evaluated by repeated-measures regression modeling. Overall residency performance was evaluated by comparing the residency average to the national average each year and by tracking ABEM national written examination pass rates. Results: Resident performance improved following course implementation. Following the course's introduction, the odds of a resident exceeding the national mean increased by a factor of 3.9 (95% CI 1.9-7.3), and the percentage of residents exceeding the national mean for their PGY increased by 37% (95% CI 23%-52%). Following course introduction, the overall residency mean score has outperformed the national exam mean annually, and the first-time ABEM written exam board pass rate has been 100%. Conclusion: A multi-faceted in-training exam program centered around a 14-week course markedly improved overall residency performance on the in-training exam. Limitations: This was a before-and-after evaluation, as randomizing residents to receive the course was not logistically or ethically feasible.

.0 years of practice. Among the non-residency-trained, non-boarded EM physicians, the percentage of individuals with board actions against them was significantly higher (6.9% vs. 1.9%; 95% CI for the difference of 5.0% = 3.1 to 7.5%), but the incidence of actions was not significantly different (1.3 vs. 3.4 events/1000 years of practice; 95% CI for the difference of 2.1/1000 = -3/1000 to +8/1000), although the power to detect a difference was 30%. Conclusion: In this study population, EM-trained physicians had significantly fewer total state medical board disciplinary actions against them than non-EM-trained physicians, but when adjusted for years of practice (incidence), the difference was not significant at the 95% confidence level. The study was limited by low power to detect a difference in incidence.

Objectives: We chose pain documentation as a long-term project for quality improvement in our EMS system. Our objectives were to enhance the quality of pain assessment, to reduce patient suffering and pain through improved pain management, to improve pain assessment documentation, to improve capture of initial and repeat pain scales, and to improve the rate of pain medication. This study addressed the aim of improving pain assessment documentation. Methods: This was a quasi-experiment looking at paramedic documentation of the PQRST mnemonic and pain scales. Our intervention consisted of mandatory training on the importance and necessity of pain assessment and treatment. In addition to classroom training, we used rapid-cycle individual feedback and public posting of pain documentation rates (with unique IDs). The categories of chief complaint studied were abdominal pain, blunt injury, burn, chest pain, headache, non-traumatic body pain, and penetrating injury. We compared the pain documentation rates in the 3 months prior to intervention, the 3 months of intervention, and the 3 months post-intervention. Using repeated-measures ANOVA, we compared rates of paramedic documentation over time. Results: Our EMS system transported 42,166 patients during the study period, of which 15,490 transports were for painful conditions in the defined chief complaint categories. There were 168 paramedics studied, of whom 149 had complete data.
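As an illustrative aside, the repeated-measures ANOVA described in the methods above could be sketched as follows; the per-paramedic documentation rates are simulated, not study data (Python with pandas and statsmodels assumed):

    # Hypothetical sketch: repeated-measures ANOVA of per-paramedic pain
    # documentation rates across three quarters, using simulated data.
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(2)
    rows = []
    for medic in range(30):                       # 30 simulated paramedics
        base = rng.normal(0.35, 0.05)             # baseline documentation rate
        for quarter, lift in zip(["Q1", "Q2", "Q3"], [0.0, 0.30, 0.50]):
            rate = np.clip(base + lift + rng.normal(0, 0.05), 0, 1)
            rows.append({"medic": medic, "quarter": quarter, "doc_rate": rate})

    df = pd.DataFrame(rows)
    print(AnovaRM(df, depvar="doc_rate", subject="medic", within=["quarter"]).fit())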
Documentation increased from 1,819 of 5,122 painful cases (35.5%) in Qtr 1 to 4,625 of 5,180 painful cases (89.3%) in Qtr 3. The trend toward increased rates of pain documentation over the three quarters was strongly significant (p < 0.001). Paramedics were significantly more likely to document pain scales and PQRST assessments over the course of the study, with the highest rates of documentation compliance in the final 3-month period. Conclusion: A focused intervention of education and individual feedback through classroom training, one-on-one training, and public posting improves paramedic documentation rates of perceived patient pain.

Background: Emergency medical services (EMS) systems are vital in the identification, assessment, and treatment of trauma, stroke, myocardial infarction, and sepsis, and in improving early recognition, resuscitation, and transport to adequate medical facilities. EMS personnel provide similar first-line care for patients with syncope, performing critical actions such as initial assessment and treatment as well as gathering key details of the event. Objectives: To characterize emergency department patients with syncope receiving initial care by EMS, and to characterize the role of EMS as initial providers. Methods: We prospectively enrolled patients over 18 years of age who presented with syncope or near syncope to a tertiary care ED with 72,000 annual patient visits from June 2009 to June 2011. We compared patient age, sex, comorbidities, and 30-day cardiopulmonary adverse outcomes (defined as myocardial infarction, pulmonary embolism, significant cardiac arrhythmia, and major cardiovascular procedure) between EMS and non-EMS patients. Descriptive statistics, two-sided t-tests, and chi-square testing were used as appropriate. Results: Of the 669 patients enrolled, 254 (38.0%) arrived by ambulance. The most common complaints in patients transported by EMS were fainting (50.4%) and dizziness (45.7%); syncope was reported in 28 (11.0%). Compared to non-EMS patients, those who arrived by ambulance were older (mean age (SD) 64.5 (18.7) vs. 60.6 (19.5) years, p = 0.012). There were no differences in the proportion of patients with hypertension (20.0% vs 32.0%, p = 0.75), coronary artery disease (8.85% vs 15.3%, p = 0.67), diabetes mellitus (6.5% vs 9.5%, p = 0.57), or congestive heart failure (3.8% vs 6.6%, p = 0.74). Sixty-nine (10.8%) patients experienced a cardiopulmonary event within 30 days. Twenty-eight (4.4%) patients who arrived by ambulance and 41 (6.4%) non-EMS patients had a subsequent cardiopulmonary adverse event (RR 1.08, 95% CI 0.68-1.69) within 30 days. The table tabulates interventions provided by EMS prior to ED arrival. Conclusion: EMS providers care for more than one-third of ED syncope patients and often perform key interventions. EMS systems offer opportunities for advancing diagnosis, treatment, and risk stratification in syncope patients.

Background: Abdominal pain is the most common reason for visiting an emergency department (ED), and abdominopelvic computed tomography (APCT) use has increased dramatically over the past decade. Despite this, there has been no significant change in rates of admission or diagnosis of surgical conditions. Objectives: To assess whether an electronic accountability tool affects APCT ordering in ED patients with abdominal or flank pain. We hypothesized that implementation of an accountability tool would decrease APCT ordering in these patients.
Methods: A before-and-after study design was used with an electronic medical record at an urban academic ED from Jul-Nov 2011, with the electronic accountability tool implemented in Oct 2011 for any APCT order. Inclusion criteria: age ≥18 years, non-pregnant, and chief complaint or triage pain location of abdominal or flank pain. Starting Oct 17, 2011, resident attempts to order an APCT triggered an electronic accountability tool which only allowed the order to proceed if approved by the ED attending physician. The attending was prompted to enter the primary and secondary diagnoses indicating APCT, agreement with the need for CT, and, if no agreement, who was requesting this CT (admitting or consulting physician) and their pretest probability (0-100) of the primary diagnosis. Patients were placed into two groups: those who presented prior to (PRE) and after (POST) the deployment of the accountability tool.

Background: There has been a paradigm shift in the diagnostic work-up for suspected appendicitis. ED-based staged protocols call for the use of ultrasound prior to CT scanning because of its lack of radiation and the morbidity related to contrast. A barrier to implementation is the lack of 24/7 availability of ultrasound. Objectives: To evaluate the impact of the implementation of ED-performed appendix ultrasound (APUS) on CT utilization in the staged workup for appendicitis in the emergency department. Methods: We performed a quasi-experimental, before/after study. We compared data from the first 8 months of 2009, before the availability of ED-performed APUS, with the same interval in 2011, after introduction of ED APUS. We excluded patients who had appendectomies for reasons other than appendicitis or had been diagnosed prior to arrival. No patient identifiers were included in the analysis, and the study was approved by the hospital IRB. We report the following descriptive statistics (percentages, sensitivities, and absolute utilization changes). Conclusion: Implementation of ED APUS in the staged workup of appendicitis was associated with a significant reduction in overall CT utilization in the ED.

Objectives: This study aims to evaluate ED patients' knowledge of radiation exposure from CT and MRI scans as well as the long-term risk of developing cancer. We hypothesize that ED patients will have a poor understanding of the risks, and will not know the difference between CT and MRI. Methods: DESIGN - This was a cross-sectional survey study of adult, English-speaking patients at two EDs from 6/13/11-8/13/11. SETTING - One location was a tertiary care center with an annual ED census of 45,000 patient visits, and the other was a community hospital with an annual ED census of 35,000 patient visits. OBSERVATIONS - The survey consisted of six questions evaluating patients' understanding of radiation exposure from CT and MRI as well as long-term consequences of radiation exposure. Patients were then asked their age, sex, race, highest level of education, annual household income, and whether they considered themselves health care professionals. Results: There were 500 participants in this study, 315 (of 5,589 total) from the academic center and 185 (of 4,988 total) from the community hospital during the study period. Overall, only 10% (95% CI 7-12%) of participants understood the radiation risks associated with CT scanning. 60% (95% CI 56-65%) of patients believed that an abdominal CT had the same amount of or less radiation than a chest x-ray.
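As an illustrative aside, survey proportions with 95% confidence intervals like those reported above can be computed from raw counts with a standard binomial interval; the counts below are hypothetical (Python with statsmodels assumed):

    # Hypothetical sketch: Wilson 95% confidence interval for a survey proportion,
    # e.g. 50 of 500 respondents answering a radiation-knowledge item correctly.
    from statsmodels.stats.proportion import proportion_confint

    correct, total = 50, 500                      # hypothetical counts
    low, high = proportion_confint(correct, total, alpha=0.05, method="wilson")
    print(f"proportion = {correct / total:.1%}, 95% CI {low:.1%} to {high:.1%}")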
25% (95% CI 21-29%) believed that there was an increased risk of developing cancer from repeated abdominal CTs. Only 22% (95% CI 19-26%) of patients knew that MRI scans had less radiation than CT. 44% (95% CI 39-49%) either did not know or believed that repeated MRIs were associated with an increased risk of developing cancer. Higher educational level, household income, and identification as a health care professional were all associated with correct responses, but even within these groups, a majority gave incorrect responses. Conclusion: In general, ED patients do not understand the radiation risks associated with advanced imaging modalities. We need to educate these patients so that they can make informed decisions about their own health care.

Background: Homelessness has been associated with many poor health outcomes and frequent ED utilization. It has been shown that frequent use of the ED in any given year is not a strong predictor of subsequent use. Identifying a group of patients who are chronic high users of the ED could help guide intervention. Objectives: The purpose of this study is to identify whether homelessness is associated with chronic ED utilization. Methods: A retrospective chart review was performed of the records of the 100 most frequently seen patients in the ED for each year from 2005-2010 at a large, urban academic hospital with an annual volume of 55,000. Patients' visit dates, chief complaints, dispositions, and housing status were reviewed. Homelessness was defined by self-report at registration. Patients seen >4 times in at least three of the five study years were identified as Chronic High Utilizers, and those who visited the ED >20 times in at least three of the five years were identified as Chronic Ultra-High Utilizers. Descriptive statistics with confidence intervals were calculated, and comparisons were made using non-parametric tests. Results: During the 5-year study period, 189,371 unique patients were seen, of whom 0.7% were homeless. 335 patients were identified as frequent users. Some patients appeared on the top-100 utilizer lists in multiple years. 67 (20%, 95% CI 16-25%) of these patients were identified as homeless. 148 patients were seen >4 times in at least three of the 5 years, and 23 (16%, 95% CI 11-22%) were homeless. 12 patients were seen >20 times in at least three of the 5 years, and 5 (41%, 95% CI 19-68%) were homeless. Our facility has a 40% admission rate; however, non-homeless Chronic Ultra-High Utilizers had an admission rate of 24%, and homeless Chronic Ultra-High Utilizers were admitted 14% of the time. Conclusion: Chronic Ultra-High Utilizers of our ED are disproportionately homeless and present with lower severity of illness. These patients may prove to be a cost-effective group to house or otherwise involve with aggressive case management. The debate over homeless housing programs and case management solutions can be sharpened by better defining the groups who would most benefit and who represent the greatest potential saving for the health system.

Background: The prevalence of obese patients presenting to our emergency department (ED) is 38%: obese patients present in disproportionate numbers compared to the general population (US rate = 27%).
In spite of this, there is a disconnect between patients' perceptions of weight and health: many patients underestimate their weight and report that a key barrier to weight loss is patient-provider communication. Such discussions have proven highly effective in smoking, drug, and alcohol cessation and are an important initial step toward promoting wellness. Information about patient-provider communication is essential for designing and implementing emergency department (ED)-based interventions to help increase patient awareness about weight-related medical issues and provide counseling for weight reduction. Objectives: We assessed patients' perceptions about obesity as a disease and patient communication with their providers through two questions: Do you believe your present weight is damaging to your health? Has a doctor or other health professional ever told you that you are overweight? Methods: A descriptive cross-sectional study was performed in an academic tertiary care ED. A randomized sample of patients (every fifth) presenting to the ED (n = 453) was enrolled. Pregnant patients and patients who were medically unstable, cognitively impaired, or unable or unwilling to provide informed consent were excluded. Percentages of "yes" and "no" are reported for each question based on patient BMI, ethnicity, sex, and the number of comorbid conditions. Regression analysis was used to determine differences in responses between subgroups. Results: Among overweight/obese, white/black patients, 42.5% did not feel their weight was damaging to their health, and 54.7% reported they had not been told by a doctor that they were overweight. Of individuals who had been told by a doctor they were overweight, 23.2% still believed their present weight was not damaging to their health. Of individuals who had not been told by a doctor they were overweight, 41.5% believed their present weight was damaging to their health. No differences by race or age were found; p values were <0.05 for all reported results. Conclusion: Our data point toward a disconnect regarding patients' perceptions of health and weight. Timely education about the burden of obesity may lead to a decrease in its overall prevalence. (Originally submitted as a "late-breaker.")

Objectives: To examine the attitudes and expectations of patients admitted for inpatient care following an emergency department visit. Methods: A descriptive study was done by surveying a voluntary sample of adult patients (n = 210) admitted to the hospital from the emergency department in one urban teaching hospital in the Midwest. A short, nine-question survey was developed to assess patient attitudes and expectations toward HIV testing, consent, and requirements. Analyses consisted of descriptive statistics, correlations, and chi-square analyses. Results: The majority of patients report that HIV testing should be a routine part of health care screening (82.4%) and that the hospital should routinely test admitted patients for HIV (78.6%). Despite these overall positive attitudes toward HIV testing, the data also suggest that patients have strong attitudes toward consent requirements, with 80% acknowledging that HIV testing requires special consent and 72% reporting that separate consent should be required. The data also showed a statistically significant difference by race in the proportion of patients who believed that HIV testing is a part of routine health care screening (χ² = 6.825, df = 1, p = 0.009).
Conclusion: Patients' attitudes and expectations toward routine HIV testing are consistent with the CDC recommendations. Emergency departments are an ideal setting to initiate HIV testing, and the findings suggest that patients expect hospital policies to outline procedures for obtaining consent and screening all patients who are admitted to the hospital from the ED.

Results: The analysis revealed a "hot spot," a cluster of 833 counties (24.5%) with high CA rates adjacent to counties with high CA rates, located across the southeastern US (P < 0.001). Within these counties, the average CA rate was 14% higher than the national average. A "cool spot," a cluster of 548 counties (16.1%) with low rates, was located across the Midwest (P < 0.001). In this cool spot the average CA rate was 12% lower than the national average. Figures 1 and 2 show US adjusted rates and spatial autocorrelation of CA deaths, respectively. Conclusion: We identify geographic disparities in CA mortality and describe the cardiac arrest belt in the southeastern US. A limitation of this analysis was the use of ICD-10 codes to identify cardiac arrest deaths; however, no other national data exist. An improved understanding of the drivers of this variability is essential to targeted prevention and treatment strategies, especially given the recent emphasis on development of cardiac resuscitation centers and cardiac arrest systems of care. An understanding of the relation between population density, cardiac arrest count, and cardiac arrest rate will be essential to the design of an optimized cardiac arrest system.

We defined ED utilization during the past 12 months as non-users (0 visits), infrequent users (1-3 visits), frequent users (4-9 visits), and super-frequent users (≥10 visits). We compared demographic data, socioeconomic status, health conditions, and access to care between these ED utilization groups. Results: Overall, super-frequent use was reported by 0.4% of U.S. adults, frequent use by 2%, and infrequent ED use by 19%. Higher ED utilization was associated with increased self-reported fair to poor health (55% for super-frequent, 48% for frequent, 22% for infrequent, 10% for non-ED users). Frequent ED users were also more likely to be impoverished, with 31% of super-frequent, 25% of frequent, 13% of infrequent, and 9% of non-ED users reporting a poverty-income ratio <1. Adults with higher ED utilization were more likely to report the ED as the place they usually go when sick (10% for super-frequent, 6% for frequent, 2% for infrequent, 0.5% for non-ED users). They also reported greater outpatient resource utilization, with 73% of super-frequent, 48% of frequent, 25% of infrequent, and 10% of non-ED users reporting ≥10 outpatient visits/year. Frequent ED users were also more likely than non-ED users to be covered by Medicaid (34% for super-frequent, 26% for frequent, 12% for infrequent, 5% for non-ED users). Conclusion: Frequent ED users were a vulnerable population with lower socioeconomic status, poor overall health, and high outpatient resource utilization. Interventions designed to divert frequent users from the ED should also focus on chronic disease management and access to outpatient services, rather than focusing solely on limiting ED utilization.

Objectives: We explored factors associated with specialty provider willingness to provide urgent appointments to children insured by Medicaid/CHIP.
Methods: As part of a mixed-methods study of child access to specialty care by insurance status, we conducted semi-structured qualitative interviews with a purposive sample of 26 specialists and 14 primary care physicians (PCPs) in Cook County, IL. Interviews were conducted from April to September 2009, until theme saturation was reached. Resultant transcripts and notes were entered into ATLAS.ti and analyzed using an iterative coding process to identify patterns of responses in the data, ensure reliability, examine discrepancies, and achieve consensus through content analysis. Results: Themes that emerged indicate that PCPs face considerable barriers getting publicly insured patients into specialty care and use the ED to facilitate this process: "If I send them to the emergency room, I'm bypassing a number of problems. I'm fully aware that I'm crowding the emergency room." Specialty physicians reported that decisions to refuse or limit the number of patients with Medicaid/CHIP are due to economic strain or direct pressure from their institutions: "In the last budget revision, we were [told], 'You are losing money, so you need to improve your patient mix.'" In specialty practices with limited Medicaid/CHIP appointment slots, factors associated with appointment success included: high acuity or complexity, a personal request from or an informal economic relationship with the PCP, geography, and patient hardship. "If it's a really desperate situation and they can't find anybody else, I will make an exception." Specialists also acknowledged that "Patients who can't get an appointment go to the ER and then I am obligated to see them if they're in the system." Conclusion: These exploratory findings suggest that a critical linkage exists between hospital EDs and affiliated specialty clinics. As health systems restructure, there is an opportunity for EDs to play a more explicit role in improving care coordination and access to specialty care.

Albert Amini, Erynne A. Faucett, John M. Watt, Richard Amini, John C. Sakles, Asad E. Patanwala; University of Arizona, Tucson, AZ
Background: Trauma patients commonly receive etomidate and rocuronium for rapid sequence intubation (RSI) in the ED. Due to the long duration of action of rocuronium and the short duration of action of etomidate, these patients require prompt initiation of sedatives after RSI. This prevents potential patient awareness under pharmacologic paralysis, which could be a terrifying experience. Objectives: The purpose of this study was to evaluate the effect of the presence of a pharmacist during trauma resuscitations in the ED on the initiation of sedatives and analgesics after RSI. We hypothesized that pharmacists would decrease the time to provision of sedation and analgesia. Methods: This was an observational, retrospective cohort study conducted in a tertiary, academic ED that is a Level I trauma center. Consecutive adult trauma patients who received rocuronium in the ED for RSI were included during two time periods: 07/01/07 to 07/30/08 (pre-phase: no pharmacy services in the ED) and 07/01/09 to 06/30/11 (post-phase: pharmacy services in the ED). Since the pharmacist could not respond to all traumas in the post-phase, this group was further categorized based on whether the pharmacist was present or absent at the trauma resuscitation. Data collected included patient demographics, baseline injury data, and medications used.
The median time from RSI to initiation of sedatives and analgesics was compared between the pre-phase group (group 1), the post-phase pharmacist-absent group (group 2), and the post-phase pharmacist-present group (group 3) using the Kruskal-Wallis test. Results: A total of 200 patients were included in the study (group 1 = 100, group 2 = 70, and group 3 = 30). Median age was 35, 48.5, and 54.5 years in groups 1, 2, and 3, respectively (p = 0.005). There were no other differences between groups with regard to demographics, mechanism of injury, presence of traumatic brain injury, Glasgow Coma Scale score, vital signs, ED length of stay, or mortality. Median time between RSI and post-intubation sedative use was 13, 15, and 6 minutes in groups 1, 2, and 3, respectively (p < 0.001). Median time between RSI and post-intubation analgesia use was 80, 16, and 10 minutes in groups 1, 2, and 3, respectively (p < 0.001). Conclusion: The presence of a pharmacist during trauma resuscitations decreases time to provision of sedation and analgesia after RSI. Background: Outpatient antibiotics are frequently prescribed from the ED, and limited health literacy may affect compliance with recommended treatments. Objectives: To determine whether, among patients stratified by health literacy level, multimodality discharge instructions improve compliance with outpatient antibiotic therapy and follow-up recommendations. Methods: This was a prospective randomized trial that included consenting patients discharged with outpatient antibiotics from an urban county ED with an annual census of 100,000. Patients unable to receive text messages or voicemails were excluded. Health literacy was assessed using a validated health literacy assessment, the Newest Vital Sign (NVS). Patients were randomized to a discharge instruction modality: 1) usual care, typed and verbal medication and case-specific instructions; 2) usual care plus text-messaged instructions sent to the patient's cell phone; or 3) usual care plus voicemailed instructions sent to the patient's cell phone. Antibiotic pick-up was verified with the patient's pharmacy at 72 hours. Patients were called at 30 days to determine antibiotic compliance. Z-tests were used to compare 72-hour antibiotic pickup and patient-reported compliance across instructional modality and NVS score groups. Results: 758 patients were included (55% female, median age 30, range 5 months to 71 years); 98 were excluded. 23% had an NVS score of 0-1, 31% 2-3, and 46% 4-6. The proportion of prescriptions filled at 72 hours varied significantly across NVS score groups; self-reported medication compliance at 30 days revealed no difference across instructional modalities or NVS scores (Table 1). Conclusion: In this sample of urban ED patients, 72-hour prescription pickup varied significantly by validated health literacy score, but not by instruction delivery modality. In this sample, patients with lower health literacy are at risk of not filling their outpatient antibiotics in a timely fashion. Background: The Care Transitions Measure (CTM-3) has been developed, validated, and utilized to study the processes of care involved in successful care transitions from inpatient to outpatient settings, but has not been utilized in the ED. Objectives: We hypothesized that the CTM-3 could be successfully implemented in the ED without differential item difficulty by age, sex, education, or race; and would be associated with measures of quality of care and likelihood of following physician recommendations.
Methods: A descriptive study design based on exit surveys was used to measure CTM-3 scores and likelihood of following treatment recommendations. Surveys were administered to a daily cross-sectional sample of all patients leaving the ED between 7a-12a by research assistants in an urban academic ED setting for 3 weeks in November 2011. We report means and standard deviations, and used analysis of variance to identify differences in CTM-3 scores for those who planned and did not plan to follow ED recommendations. Results: 750 surveys were completed; patients were 43 ± 19 years old, 58% black, 61% female, 56% with at least some college education, and 38% were admitted. Average CTM-3 score was 87.1 ± 21.6 (range 0-100). Scores were not associated with sex (p = 0.57), race (p = 0.19), or education level (p = 0.25). Lower CTM scores were associated with increasing age (p = 0.03) and with patient perceptions that the ED team was less likely to use words that they understood, listen carefully to them, inspire their confidence and trust, or encourage them to ask questions (all p < 0.01). Those who reported they were ''very likely'' to follow ED treatment had an average score of 89 ± 21, while those who were ''unlikely'' or ''very unlikely'' to follow ED treatment plans had an average score of 47 ± 28 (p = 0.00). Conclusion: The CTM-3 performs well in the ED and exhibited differential item difficulty only by age; there was no significant difference by race, sex, or education level. Furthermore, it is highly associated with likelihood of following physician recommendations. Future studies will focus on the ability of CTM-3 scores to discriminate between patients who did or did not experience a subsequent ED visit or rehospitalization. Age and race were found to be significant predictors of the RACE pathway. Regression of the data by race revealed that blacks (OR 1.9: CI 1.3-2.6; p < 0.0002), Hispanics (OR 3.0: CI 1.3-2.6; p = 0.0001), and Asians (OR 2.3: CI 1.1-4.9; p = 0.03) were more likely to enter the RACE cohort than were whites; however, much of this discrepancy is accounted for by age. The mean age of minority patients was 62 years, while white patients were older at 71 years (p = 0.002). Conclusion: In a diverse demographic population we found that racial minorities were presenting at younger ages for chest pain and were more likely to receive cardiac testing at bedside than their white counterparts; and hence, were selected to a lower level of care (non-monitored unit). Background: Expanding insurance coverage is designed to improve access to primary care and reduce use of emergency services. Whether expanding coverage achieves this is of paramount importance as the United States prepares for the Affordable Care Act. Objectives: We examined ED and outpatient department use after the State Children's Health Insurance Program (SCHIP) coverage expansion, focusing on adolescents (a major target group for SCHIP) versus young adults (not targeted). We hypothesized that coverage would increase use of outpatient services and decrease use of emergency department services. Methods: Using the National Ambulatory Medical Care Survey and the National Hospital Ambulatory Medical Care Survey, we analyzed years 1992-1996 as baseline and then compared use patterns in 1999-2009 after SCHIP launch. Primary outcomes were population-adjusted annual visits to ED versus non-emergency outpatient settings.
Interrupted time-series analyses were performed on use rates to ED and outpatient departments between adolescents (11-18 years old) and young adults (19-29 years old) in the pre-SCHIP and SCHIP periods. Outpatient-to-ED ratios were calculated and compared across time periods. Results: The mean number of outpatient adolescent visits increased by 299 visits per 1000 persons (95% CI, 140-457), while there was no statistically significant increase in young adult outpatient visits across time periods. There was no statistically significant change in the mean number of adolescent ED visits across time periods, while young adult ED use increased by 48 visits per 1000 persons (95% CI, 24-73). The adolescent outpatient-to-ED ratio increased by 1.0 (95% CI, 0.49-1.6), while the young adult ratio decreased by 0.53 across time periods (95% CI, −0.90 to −0.16). Conclusion: Since SCHIP, adolescent non-ED outpatient visits increased while ED visits remained unchanged. In comparison to young adults, expanding insurance coverage to adolescents improved access to health care services and suggests a shift to non-ED settings. As an observational study, we are unable to control for secular trends during this time period; as an ecological study, we are unable to examine individual variation. Expanding insurance through the Affordable Care Act of 2010 will likely increase use of outpatient services but may not decrease emergency department volumes. Background: Cancer patients are receiving a greater proportion of their care on an outpatient basis. The effect of this change in oncology care patterns on ED utilization is poorly understood. Objectives: To examine the characteristics of ED utilization by adult cancer patients. Methods: Between July 2007 and March 2009, all new adult cancer patients referred to a tertiary care cancer centre were recruited into a study examining psychological distress. These patients were followed prospectively until September 2011. The collected data were linked to administrative data from three tertiary care EDs. Variables evaluated in this study included basic ... Background: We have previously shown that reducing non-value-added activities through the application of the Lean process improvement methodology improves patient satisfaction, physician productivity, and emergency department length of stay. Objectives: In this investigation, we tested the hypothesis that non-value-added activities reduce physician job satisfaction. Methods: To test this hypothesis, we conducted time-motion studies on attending emergency physicians working in an academic setting and categorized their activities into value-added (time in room with patient, time discussing cases and educating medical learners, time in room with patient and learner), necessary non-value-added activities (charting, sign-out, looking up labs), and unnecessary non-value-added activities (looking for things, looking for people, on the phone). The physicians were then surveyed using a 10-point Likert scale to determine their relative satisfaction with each of the individual tasks (1 = worst part of day, 10 = best part of day). Results: Physicians spent 46% of their shift performing value-added work, 38% of their shift performing necessary non-value-added activities, and 16% of their shift performing unnecessary non-value-added activities (waste).
Weighted physician satisfaction (satisfaction × [percent of time spent performing the activity / percent of time engaged in that activity category]) was highest when the physician was performing value-added work (8.75) compared to performing either necessary non-value-added work (3.35) or waste (2.61). Conclusion: The attending physicians we studied spent the majority of their time performing non-value-added activities, which were associated with lower satisfaction. Application of process improvement techniques such as Lean, which focus on reducing non-value-added work, may improve emergency physician job satisfaction. Background: Rocuronium and succinylcholine are the most commonly used paralytics for rapid sequence intubation (RSI) in the ED. After RSI, patients need sustained sedation while they are mechanically ventilated. However, the longer duration of action of rocuronium may influence subsequent sedation dosing while the patient is therapeutically paralyzed. Objectives: We hypothesized that patients who receive rocuronium would be more likely to receive lower doses of post-RSI sedation compared to patients who receive succinylcholine. Methods: This was an observational, retrospective cohort study conducted in a tertiary, academic ED. Consecutive adult patients who received RSI using etomidate for induction of sedation between 07/01/09 and 06/30/10 were included. Patients were then categorized based on whether they received rocuronium or succinylcholine for paralysis. The dosing of post-RSI sedative infusions was compared at 0, 30, 60, and 120 minutes after initiation between the two groups using the Wilcoxon rank-sum test. Results: A total of 254 patients were included in the final analysis (rocuronium = 127, succinylcholine = 127). Mean age was 52 and 47 years in the rocuronium and succinylcholine groups, respectively (p = 0.04). There were no other baseline differences between groups with regard to demographics, reason for intubation, stroke, traumatic brain injury, Glasgow Coma Scale score, pain scores, or vital signs. In the overall cohort, 90.2% (n = 229) of patients were given a sedative infusion or bolus in the ED. Most patients were initiated on propofol (n = 169) or midazolam (n = 49) infusions. Median propofol infusion rates at 0, 30, 60, and 120 minutes were 20, 20, 27.5, and 30 mcg/kg/min in the rocuronium group and 20, 40, 45, and 45 mcg/kg/min in the succinylcholine group, respectively. The difference was statistically significant at 30 (p < 0.001) and 60 (p = 0.003) minutes. Median midazolam infusion rates at 0, 30, 60, and 120 minutes were 2, 2, 2, and 3 mg/hour in the rocuronium group and 2, 3, 4, and 4.5 mg/hour in the succinylcholine group, respectively. The difference was statistically significant at 60 (p = 0.003) and 120 (p = 0.04) minutes. Conclusion: Patients who receive rocuronium are more likely to receive lower doses of sedative infusions post-RSI due to sustained therapeutic paralysis. This may put them at risk for being awake under paralysis. There was a difference in presenting pain (p < 0.001), stress (p < 0.001), and anxiety (p < 0.001) among patients who received an opioid in the ED. There was a difference in presenting pain (p < 0.001) for patients discharged with an opioid prescription, but not for stress (p = 0.32) or anxiety (p = 0.90).
Conclusion: Patient-reported pain, stress, and anxiety are higher among patients who received an opiate in the ED than in those who did not, but only pain is higher among patients who received a discharge prescription for an opioid. Methods: This was a prospective, randomized crossover study on the use of GVL and DL by incoming pediatric interns prior to advanced life support training. At the start of the study, the interns received a didactic session and expert modeling of the use of both devices for intubation. Two scenarios were used: (1) normal intubation with a standard airway and (2) difficult intubation with tongue edema and pharyngeal swelling. Interns then intubated Laerdal SimBaby in each scenario with both GVL and DL for a total of four randomized intubation scenarios. Primary outcomes included time to successful intubation and the rate of successful intubation. The interns also rated their satisfaction with the devices using a visual analog scale (0-10) and chose their preferred device for their next intubation. Results: 29 interns were included in this study. In the normal airway scenario, there were no differences in the mean time for intubation with GVL or DL (62.9 ± 24.1 vs 61.8 ± 26.2 seconds, p = NS) or the number of interns who performed successful intubation (23 vs 22, p = NS). In the difficult airway scenario, the interns took longer to intubate with GVL than DL (92.3 ± 26.6 vs 59.9 ± 22.7 seconds, p = 0.008), but there were no differences in the number of successful intubations (17 vs 19, p = NS). Interns rated their satisfaction higher for GVL than DL (7.3 ± 1.8 vs 6.5 ± 1.5, p = 0.05) and GVL was chosen as the preferred device for their next intubation by a majority of the interns (19/29, 66%). Conclusion: For novice clinicians, GVL does not improve the time to intubation or intubation success Objectives: To determine the time to intubation, the number of attempts, and the occurrence of hypoxia, in patients intubated with a C-MAC device versus those intubated using a standard laryngoscope. Methods: Randomized controlled trial using exception from informed consent that included patients undergoing endotracheal intubation with a standard laryngoscope at an urban Level I trauma center. Eligible patients were randomized to undergo intubation using the C-MAC or standard laryngoscopy. Standard laryngoscopy was performed using a C-MAC device laryngoscope with the video output obstructed to ensure equivalent laryngoscope blades in the two groups. Data were collected by a trained research assistant at the patient's bedside and video review by the investigators. The number of attempts made, the initial and lowest oxygen saturation (SpO 2 ), and the total time until the intubation was successful was recorded. Hypoxia was defined as an oxygen saturation <93%. Data were compared with Wilcoxon rank sum and chi-square tests. Results: Thirty-eight patients were enrolled, 20 (70% male, median age 58, range 28 to 86, median SpO 2 97%, range 79 to 100) in the standard laryngoscopy group and 18 (67% male, median age 58, range 19 to 73, median SpO 2 96.5%, range 78 to 100) in the C-MAC group. The median number of attempts for standard laryngoscopy was 1, range 1 to 3, and for C-MAC was 1, range 1 to 2 (p = 0.43). The median time to intubation for the standard laryngoscopy group was 54 seconds (range 7 to 89) and for the C-MAC group was 41 seconds (range 4 to 101)(p = 0.05). Hypoxia was detected in 5/20 (20%) in the standard laryngoscopy group and 1/18 (6%) in the C-MAC group (p = 0.15). 
The median decrease in oxygen saturation during the attempt was 5.4% (range 0% to 31%) for the standard laryngoscopy group and 2.3% (range 0% to 16%) for the C-MAC group. Conclusion: We did not detect a difference in number of attempts, the occurrence of hypoxia, or the diagnosis of aspiration pneumonia between standard laryngoscopy and the C-MAC. The time to successful intubation was shorter for patients intubated with the C-MAC. The C-MAC device appears to be superior to standard laryngoscopy for emergent endotracheal intubation. (Originally submitted as a ''late-breaker.'') The Background: Aspiration pneumonia is a complication of endotracheal intubation that may be related to the difficulty of the airway procedure. Objectives: To determine the association of the device used, the time to intubation, the number of attempts to intubate, and the occurrence of hypoxia with the subsequent development of aspiration pneumonia. Methods: This was a prospective observational study of patients undergoing endotracheal intubation by emergency physicians at an urban Level I trauma center conducted from 7/1/2010 until 11/1/2011. The device used on the initial attempt to intubate was at the discretion of the treating physician. Data were collected by a trained research assistant at the patient's bedside. The device used, the number of attempts made to intubate, the lowest oxygen saturation during the attempt, and the total time until intubation was successfully accomplished were recorded. Patient's medical records were reviewed for the subsequent diagnosis of aspiration pneumonia. Hypoxia was defined as an oxygen saturation <93%. Data were analyzed using multinomial logistic regression and odds ratios (OR). Results: 654 patients were enrolled; 141 (22%) subsequently developed aspiration pneumonia. 328 were intubated with a standard laryngoscope (SL), 277 using the C-MAC, 26 with an intubating laryngeal mask, and 23 with nasotracheal intubation (NI) (OR 0.87, 95% CI = 0.70-1.06). Comparison of individual devices versus SL did not show an association by device type. The median number of attempts for patients with aspiration pneumonia was 1, range 1 to 3, and for those without was 1, range 1 to 9 (OR 0.78, 95%CI = 0.43-1.38). The median time to intubation for patients who developed aspiration pneumonia was 55 seconds (range 4 to 756) and for those who did not was 54 seconds (range 4 to 721)(OR 1.00, 95%CI = 0.99-1.00). Hypoxia during intubation was detected in 53/141 (38%) in the aspiration pneumonia group and 175/513 (34%) in the no aspiration pneumonia group (OR 1.06, 95% CI = 0.65-1.72). Conclusion: There was not an association between the device used, the number of attempts, the time to intubation, or the occurrence of hypoxia during the intubation, and the subsequent occurrence of aspiration pneumonia. Background: Japanese census data estimate that 35 million, or nearly 29% of the overall population, will be over age 65 by the year 2020. Similar trends are apparent throughout the developed world. Although increased patient age affects airway management, comprehensive information in emergency airway management for the elderly is lacking. Objectives: We sought to characterize emergency department (ED) airway management for the elderly in Japan including success rate, and major adverse events using a large multi-center registry. 
Methods: Design and Setting: We conducted a multicenter prospective observational study using the Japanese Emergency Airway Network (JEAN) registry of EDs at 11 academic and community hospitals in Japan between 2010 and 2011 inclusive. Data fields included ED characteristics, patient and operator demographics, methods of airway management, number of attempts, success rate, and adverse events. Participants: Patient inclusion criteria were all adult patients who underwent emergent tracheal intubation in the ED. Primary analysis: Patients were divided into two groups defined as follows: 18 to 64 years old and over 65 years old. We describe primary success rates and major adverse events using simple descriptive statistics. Categorical data are reported as proportions and 95% confidence intervals (CIs). Results: The database recorded 2710 patients (capture rate 98%) and 2623 met the inclusion criteria. Of 2623 patients, 1104 patients were 18 to 64 years old (42%) and 1519 were over 65 years old (58%). The older group had a significantly higher success rate at first-attempt intubation (1074/1519; 70.7%, 95% CI 68.8-72.6%) compared with the younger group (710/1104; 64.3%, 95% CI 61.9-66.7%). The older group had similar major adverse event rates (112/1519; 7.4%, 95% CI 6.3-8.5%) compared with the younger group (83/1104; 7.5%, 95% CI 6.2-8.8%). (See Table 1.) Background: The degree to which a patient's report of pain is associated with changes in blood pressure, heart rate, and respiratory rate is not known. Objectives: To determine to what degree a standardized painful stimulus effects a change in systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), or respiratory rate (RR), and to compare changes in vital signs between patients based on pain severity. Methods: Prospective observational study of healthy human volunteers. Subjects had their SBP, DBP, HR, and RR measured prior to pain exposure, immediately after, and 10 minutes after. Pain exposure consisted of subjects placing their hand in a bath of 0-degree water for 45 seconds. The bath was divided into two sections; the larger half was the reservoir of cooled water monitored to be 0 degrees, and the other half filled from constant overflow over the divider. Water drained from this section into the cooling unit and was then pumped up into the base of the reservoir through a diffusion grid. Subjects completed a 100 mm visual analog scale (VAS) representing their perceived pain during the exposure and graded their pain as minimal, moderate, or severe. Data were compared using 95% confidence intervals. Results: 90 subjects were enrolled, with a mean pain VAS of 40 mm (range 0 to 77); 49 reported mild pain, 41 moderate pain, and 0 severe pain. The percent change from baseline in vital signs during the exposure and 10 minutes after is presented in the table. Conclusion: There was wide variation in reported pain among subjects exposed to a standard painful stimulus. There was a larger change in heart rate during the exposure among subjects who described the standardized painful exposure as moderate than in those who described it as mild. The small observed changes in blood pressure and respiratory rate seen during the exposure did not differ by pain report and did not persist after 10 minutes. Background: Vital signs are often used to validate intensity of pain.
However, few studies have looked at the capacity of vital signs to estimate pain intensity, particularly in patients with a diagnosis that a majority of physicians would agree produces significant pain in the ED. Objectives: To determine the association between pain intensity and vital signs in consecutive ED patients and in a sub-group of patients with a diagnosis known to cause significant pain. Methods: We performed a post-hoc analysis of prospectively acquired data in a cohort study done in an urban teaching hospital with computerized triage and nurses' records. We included all consecutive adult ED patients (≥16 years old) who had any level of pain intensity measured during triage, from March 2008 to November 2010. The primary outcome was the mean heart rate, systolic blood pressure, and diastolic blood pressure for every pain intensity level from 1 to 10 on a verbal numerical scale. Our secondary outcomes were the same but limited to patients with the following diagnoses: fracture, dislocation, and renal colic. We performed descriptive statistics and one-way and two-way ANOVAs when appropriate. Results: During our study period, 42,947 patients ≥16 years old were triaged with a pain intensity of at least 1/10, and 3939 had a diagnosis known to cause significant pain. 56.5% of patients were female, with a mean pain intensity of 6.8/10 and a mean age of 47.9 years (±19.3); 22.3% were ≥65 years old. There was a statistically significant difference (P < 0.05) in mean heart rate, systolic blood pressure, and diastolic blood pressure for each level of pain intensity; for example, the difference between 1/10 and 10/10 was 3.9 beats per minute for mean heart rate, 4.0 mmHg for systolic pressure, and 4.5 mmHg for diastolic pressure. Results were similar for painful diagnoses: the difference was 0.3 beats per minute for mean heart rate, 6.5 mmHg for systolic pressure, and 8.8 mmHg for diastolic pressure. However, these differences are not clinically significant. Conclusion: Although our study is a post hoc analysis, pain intensity, heart rate, and systolic and diastolic pressures recorded during triage are usually reliable data, and a prospective study would likely produce the same result. These vital signs cannot be used to estimate or validate pain intensity in the emergency department. 8% had a positive urine drug screen. Multivariate logistic regression analyses revealed the following factors to be significantly associated with the risk of having an abnormal head CT: association with seizure (P = 0.0072); length of time of loss of consciousness, ranging from none to 0-30 min to >30 min (P = 0.0013); alteration of consciousness (P = 0.00009); post-traumatic amnesia (P = 0.0132); alcohol intake prior to injury (P = 0.0003); and initial ED GCS (P = 0.0255). Conclusion: In an emergency department cohort of patients with traumatic brain injury, symptoms including loss of or alteration in consciousness, seizure, post-traumatic amnesia, and alcohol intake appear to be significantly associated with abnormal findings on head CT. These clinical findings on presentation may be useful in helping triage head injury patients in a busy emergency department, and can further define the need for urgent or emergent imaging in patients without clearly apparent injuries. Background: The etiology of neurogenic shock is classically attributed to diminished peripheral vascular resistance (PVR) secondary to loss of sympathetic outflow to the peripheral vasculature.
However, the sympathetic nervous system also controls other key elements of the cardiovascular system, such as the heart and capacitance vessels, and disruptions in their function could complicate the hemodynamic presentation. Objectives: We sought to systematically examine the hemodynamic profiles of a series of trauma patients with neurogenic shock. Methods: Consecutive trauma patients with documented spinal cord injury complicated by clinical shock were enrolled. Hemodynamic data including systolic and diastolic blood pressure, heart rate (HR), impedance-derived cardiac output, pre-ejection period (PEP), left ventricular ejection time (LVET), and calculated systemic PVR were collected in the ED. Data were normalized for body surface area, and a validated integrated computer model of human physiology (Guyton model) was used to analyze and categorize the hemodynamic profiles based on etiology of the hypotension using a systems analysis. Correlation between markers of sympathetic outflow (HR, PEP, LVET) and shock etiology category was examined. Results: Of 9 patients with traumatic neurogenic shock, the etiology of shock was a decrease in PVR in 4 (45%; 95% CI 19 to 73%), loss of vascular capacitance in 3 (33%; 12 to 65%), and mixed peripheral resistance and capacitance in 2 (22%; 6 to 55%). The markers of sympathetic outflow had no correlation with any of the elements in the patients' hemodynamic profiles. Conclusion: Neurogenic shock is often considered to have a specific, well-characterized pathophysiology. Results from this study suggest that neurogenic shock can have multiple mechanistic etiologies and represents a spectrum of hemodynamic profiles. This understanding is important for the treatment decisions made in the management of these patients. A 3-year (2008-2010) pre-post intervention study of trauma patients requiring massive blood transfusion (MBT) was performed. We divided the population into two cohorts: a pre-protocol group (PRE), which included trauma patients receiving MBT not aided by a protocol, and a post-protocol group (POST), who underwent MBT via the massive blood transfusion protocol (MBTP). Patient demographics, 24-hour blood component totals, timing of blood component delivery, trauma Injury Severity Score (ISS), initial Glasgow Coma Scale (GCS) score, trauma mechanism, and patient mortality data were collected and analyzed using Fisher's exact tests, Student's t-tests, and Mann-Whitney U tests. Results: Fifty-two patients were included for study. Median times to delivery of first products were reduced for PRBCs (4 minutes), FFP (16 minutes), and platelets (33 minutes) between the PRE and POST cohorts. Median time to delivery of any subsequent blood product was significantly reduced (10 minutes) in the POST cohort (p = 0.024). The median number of blood products delivered was increased by 5.5 units for PRBCs, 4 units for FFP, 0.5 units for platelets, and 1 unit for cryoprecipitate after implementation of the MBTP. The percentage of patients receiving higher blood product ratios (>3:1) was reduced between the PRE and POST cohorts for the PRBC to FFP (25% reduction) and PRBC to platelet ratio groups (7% reduction). Despite improved transfusion timing and ratios, we found no significant difference in mortality (p = 0.129) between the PRE and POST cohorts when we adjusted for injury severity. Conclusion: Protocolized delivery of massive blood transfusion might reduce time to product availability and delivery, though it is unclear how this affects patient mortality in all US trauma centers.
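For readers who want to see the shape of the nonparametric comparison described in the massive transfusion abstract above, the following is a minimal sketch in Python, not the authors' analysis: the cohort labels, the example minutes, and the use of scipy's Mann-Whitney U routine are illustrative assumptions only.

from scipy.stats import mannwhitneyu

# Hypothetical minutes from activation to delivery of the first PRBC unit in each cohort
pre_protocol_minutes = [22, 18, 30, 25, 19, 28, 24]    # PRE cohort: transfusion not protocol-driven (invented data)
post_protocol_minutes = [14, 12, 20, 16, 11, 15, 13]   # POST cohort: MBTP in place (invented data)

# Two-sided Mann-Whitney U test comparing the two distributions of delivery times
stat, p_value = mannwhitneyu(pre_protocol_minutes, post_protocol_minutes, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")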
Background: Burns are common injuries that can result in significant scarring, leading to poor function and disfigurement. Unlike mechanical injuries, burns often progress both in depth and size over the first few days after injury, possibly due to inflammation and oxidative stress. A major gap in the field of burns is the lack of an effective therapy that reduces burn injury progression. Objectives: Since mesenchymal stem cells (MSC) have been shown to improve healing in several injury models, we hypothesized that species-specific MSC would reduce injury progression in a rat comb burn model. Methods: Using a 150-g brass comb preheated to 100 degrees Celsius, we created four rectangular burns, separated by three unburned interspaces, on both sides of the backs of male Sprague-Dawley rats (300 g). The interspaces represented the ischemic zones surrounding the central necrotic core. Left untreated, most of these interspaces become necrotic. In an attempt to reduce burn injury progression, 20 rats were randomized to tail vein injections of 1 mL of rat-specific MSC (10⁶ cells/mL) (n = 10) or normal saline (n = 10) 60 minutes after injury. Tracking of the stem cells was attempted by injecting several rats with quantum dot-labeled MSC. Results: By four days post-injury, all of the interspaces in the control rats (54/54, 100%) became necrotic, while in the experimental group 29/48 (60%) of the interspaces became necrotic (Fisher's exact test; P < 0.001). At 7 days, the percentage of the unburned interspaces that became necrotic in the MSC-treated group was significantly less than in the control group (80% vs. 100%, p < 0.0001). We were unable to identify any quantum dot-labeled MSC in the injured skin. No adverse reactions or wound infections were noted in rats injected with MSC. Conclusion: Intravenous injection of rat MSC reduced burn injury progression in a rat comb burn model. Background: Although basic demographics of bicyclists in accidents have been described, there is a paucity of data describing the street surface involved in accidents and whether designated bicycle roadways offer protection. This lack of information limits informed attempts to change infrastructure in a way that will decrease morbidity and/or mortality of cyclists. Objectives: To identify road surface types involved in pedal cyclist injuries and determine the relationship between injury severity and the use of designated bicycle roadways (DBR) versus non-designated roadways (NDR). We hypothesized that more severe injuries would happen at intersections regardless of DBR versus NDR. Methods: This retrospective cohort study reviewed the trauma database from a Level I trauma center in Tucson, AZ. We identified all bicyclists in the database injured in accidents involving a motor vehicle from January 1, 2009, through December 31, 2009. The patients were then linked to a local government database that documents location (latitude/longitude) and direction of travel of the cyclist. Seventy-eight total incidents were identified and categorized as occurring on a DBR versus NDR and occurring at an intersection versus not at an intersection. Results: Only one patient who arrived at the trauma center died. Fifty-one of the accidents (65%) occurred on DBRs; 63% of accidents occurring on DBRs took place in intersections. Conversely, 63% of accidents on NDRs occurred outside of intersections. The odds of an injury occurring at an intersection versus not at an intersection were 2.9 times higher (95% CI: 1.0-8.5) for DBRs compared to NDRs.
The odds of a trauma being severe (admitted) versus not severe (discharged home) were 2.7 times higher (95% CI: 0.9-8.7) when a collision occurred not at an intersection versus at an intersection. Conclusion: Contrary to our hypothesis, in this study group severe injuries were more likely outside of an intersection. However, intersections on DBRs were identified as problematic, as cyclists on a DBR were more likely to be injured in an intersection. Future city planning could target improved cyclist safety in intersections. Background: Minor thoracic injury (MTI) is frequent, and a significant proportion of patients will still have moderate to severe pain at 90 days. There is a lack of risk factors to orient specific treatment at ED discharge. Objectives: To determine risk factors for having pain (≥3/10 on a numerical pain intensity score from 0 to 10) at 90 days in a population of minor thoracic injury patients discharged from the ED. Methods: A prospective multi-center cohort study was conducted in four Canadian EDs from November 2006 to January 2010. All consecutive patients, 16 years and older, with MTI (with or without rib fracture) and a normal chest x-ray who were discharged from the ED were eligible. A standardized clinical and radiological evaluation was done at 1 and 2 weeks. Standardized phone interviews were done at 30 and 90 days. Pain evaluation occurred at five time points (ED visit, 1 and 2 weeks, 30 and 90 days). Using a pain trajectory model (SAS), we planned to identify groups with different pain evolution at 90 days. The final model was based on the importance of differences in pain evolution, confidence intervals, and the number of patients in each group. To judge the adequacy of the final model, we examined whether the posterior probabilities (i.e., a participant's probability of belonging to a certain trajectory group) averaged at least 70% for each trajectory group. Then, using multinomial logistic regression with the low-risk group as the reference, we identified significant predictors of membership in the moderate- and high-risk groups for pain at 90 days. Results: In our cohort of 1,057 patients, 1,025 had an evaluation at 90 days. We identified three groups at low (34%), moderate (50.6%), and high risk (15.4%) of having pain ≥3/10 at 90 days. Using risk factors identified by univariate analysis, we created a model to identify patients at risk containing the following predictors: age ≥30 years, female sex, current smoker, two or more rib fractures, complaint of dyspnea, and saturation <95% at the initial visit. Posterior probabilities for the low-, moderate-, and high-risk groups were 76%, 74%, and 88%. Conclusion: To our knowledge, this is the first study to identify potential risk factors for having pain at 90 days after minor thoracic injury. These risk factors should be validated in a prospective study to guide specific treatment plans. The Use of Ultrasound to Evaluate Traumatic Optic Neuropathy Benjamin Burt, Lisa Montgomery, Cynthia Garza Meissner, Sanja Plavsic-Kupesic, Nadah Zafar TTUHSC - Paul L Foster School of Medicine, El Paso, TX Background: Whenever head trauma occurs, there is the possibility for a patient to have an optic nerve injury. The current method to evaluate optic nerve swelling is to look for proptosis. However, by the time proptosis presents, significant damage has already occurred. Therefore, there is a need to establish a method to evaluate nerve injury prior to the development of proptosis.
Objectives: Fundamental to understanding the pathophysiology of optic nerve injury and repair is an understanding of the optic nerve's temporal response to trauma including blood flow changes and vascular reactivity. The aim of our study was to assess the dependability and reproducibility of ultrasound techniques to sequence optic nerve healing and monitor the vascular response of the ophthalmic artery following an optic nerve crush. Methods: The rat's orbit was imaged prior to and following a direct injury to the optic nerve, at 72 hours and at 28 days. 3D, 2D, and color Doppler techniques were used to detect blood flow and the course of the ophthalmic artery and vein, to evaluate the course and diameter of the optic nerve, and to assess the extent of optic nerve trauma and swelling. The parameters used to evaluate healing over time were pulsatility and resistance indices of the ophthalmic artery. Results: We have established baseline ultrasound measurements of the optic nerve diameter, normal resistance and pulsatility indices of the ophthalmic artery, and morphological assessment of the optic nerve in a rat model. Longitudinal assessment of 2D and 3D ultrasound parameters were used to evaluate vascular response of the ophthalmic artery to optic nerve crush injury. We have developed a rat model system to study traumatic optic nerve injury. The main advantages of ultrasound are low cost, non-invasiveness, lack of ionizing radiation, and the potential to perform longitudinal studies. Our preliminary data indicate that 2D and 3D color Doppler ultrasound may be used for the evaluation of ophthalmic artery and total orbital perfusion following trauma. Once baseline ultrasound and Doppler measurements are defined there is the opportunity to translate the rat model to evaluate patients with head trauma who are at risk for optic nerve swelling and to assess the usefulness of treatment interventions. Background: Alcoholism is a chronic disease that affects an estimated 17.6 million American adults. A common presentation to the emergency department (ED) is a trauma patient with altered sensorium who is presumed to be alcohol intoxicated by the physicians based on their olfactory sense. Often ED physicians may leave patients suspected of alcohol intoxication aside until the effects wear off, potentially missing major trauma as the source of confusion or disorientation. This practice often results in delays in diagnosing acute potentially life-threatening injuries in the patients with presumed alcohol intoxication. Objectives: This study will determine the accuracy of physicians' olfactory sense for diagnosing alcohol intoxication. Methods: Patients suspected of major trauma in the ED underwent an evaluation by the examining physician for the odor of alcohol as well as other signs of intoxication. Each patient had determination of blood alcohol level. Alcohol intoxication was defined as a serum ethanol level ‡80 mg/dl. Data were reported as means with 95% confidence intervals (95% CI) or proportions with inter-quartile ranges (IQR 25%-75%). Results: One hundred and fifty one patients (70% males) were enrolled in the study, median age 45 years (IQR 33-56). The median score for Glasgow Coma Scale was 15. The level of training of examining physician was a median of PGY 4 (IQR PGY 3 -Attending). Prevalence of alcohol intoxication was 43% (95% CI: 35% to 51%). 
Operating characteristics: Physician assessment of alcohol intoxication, sensitivity 84% (95% CI: 73% to 92%), specificity 87% (95% CI: 78% to 93%), positive likelihood ratio 6.6 (95% CI: 3.8 to 11.6), negative likelihood ratio 0.18 (95% CI: 0.1 to 0.3), and accuracy 86% (95% CI: 80% to 91%). Patients who were falsely suspected of being intoxicated were 7.3% (95% CI: 4% to 13%). Conclusion: Although the physicians had a high degree of accuracy in identifying patients with alcohol intoxication based on their olfactory sense, they still falsely overestimated intoxication in a significant number of non-intoxicated trauma patients. The Background: Optimal methods for education and assessment in emergency and critical care ultrasound training for residents are not known. Methods of assessment often rely on surrogate endpoints which do not assess the ability of the learner to perform the imaging and integrate the imaging into diagnostic and therapeutic decisions. We designed an educational strategy that combines asynchronous learning to teach imaging skills and interpretation with a standardized assessment tool using a novel ultrasound simulator to assess the learner's ability to acquire and interpret images in the setting of a standardized patient scenario. Objectives: To assess the ability of emergency medicine and surgical residents to integrate and apply information and skills acquired in an asynchronous learning environment in order to identify pathology and prioritize relevant diagnoses using an advanced cardiac ultrasound simulator. Methods: 12 EM R2 residents and 12 R2 surgical residents completed an online focused training program in cardiac ultrasonography (ICCU eLearning, https:// www.caeiccu.com/lms). This consisted of approximately 14 hours of intensive training in cardiac ultrasound. Residents were then given cases with a patient scenario that lacked significant details that would suggest a specific diagnosis. The resident was then given a list of 17 possible diagnoses and asked to rank the top five diagnoses in order of most likely to least likely. Each resident (blinded to the pathology displayed by the simulator) then imaged using an ultrasound simulator. After imaging, the residents were given the same list of potential diagnoses, and asked to rank them again from 1-5. Results: Overall, residents ranked the correct diagnosis in the top five significantly more times post-ultrasound than pre-ultrasound. Additionally, the residents made the correct diagnosis significantly more times postultrasound than pre-ultrasound. Similar patterns occur for congestive heart failure, pericardial effusion with tamponade, and pleural effusion. There was no significant difference pre-and post-ultrasound for pulmonary embolism and anterior infarction. Conclusion: An asynchronous online learning program significantly improves the ability of emergency medicine and surgical residents to correctly prioritize the correct diagnosis after imaging with a standardized pathology imaging simulator. Mark Favot, Jacob Manteuffel, David Amponsah Henry Ford Hospital, Detroit, MI Background: EM clerkships are often the only opportunity medical students have to spend a significant amount of time caring for patients in the ED. It is imperative that students gain exposure to as many of the various fields within EM as possible during this time. 
If the exposure of medical students to ultrasound is left to the discretion of the supervising physicians, we feel that many students would complete an EM clerkship with limited skills and knowledge in ultrasound. The majority of medical students receive no formal training in ultrasound during medical school and we believe that the EM clerkship is an excellent opportunity to fill this educational gap. Objectives: Evaluate the usefulness and effectiveness of a focused ultrasound curriculum for medical students in an EM clerkship at a large, urban, academic medical center. Methods: Prospective cohort study of fourth year medical students doing an EM clerkship. As part of the clerkship requirements, the students have a portion of the curriculum dedicated to the FAST exam and ultrasound-guided vascular access. At the end of the month they take a written test, and 1 month later they are given a survey via e-mail regarding their ultrasound experience. EM residents also completed the test to serve as a comparison group. All data analysis was done using SAS 9.2. Scores were integers ranging between 0 and 10. Descriptive statistics are given as count, mean, standard deviation, median, minimum, and maximum for each group. Due to non-Gaussian nature of the data and small group sizes, a Wilcoxon two-sample test was used to compare the distributions of scores between the groups. Results: In the table, the distribution of scores was compared between the residents (controls) and the students (subjects). The mean and median scores of the student group were higher than those of the resident group. The difference in scores between the two groups was statistically significant (p = 0.021). Conclusion: Our data reveal that after completing an EM clerkship with time devoted to learning ultrasound for the FAST exam and vascular access, fourth year medical students are able to perform better than EM residents on a written test. What remains to be determined is if their skills in image acquisition and in performance of ultrasound-guided vascular access procedures also exceed those of EM residents. Results: There were 106 respondents (total response rate 24.71%). Compared to non-EM students, students pursuing EM (8 students, 7.55%) were more drawn to their specialty for work hour control (p < 0.0009) and shorter residency length (p < 0.0338). EM students were less likely than non-EM students to be drawn to their chosen specialty for future academic opportunities (p < 0.0085). EM students formed their mentorships by referral significantly more than non-EM students (p < 0.0399), though there was no statistical difference in quality of existing mentorships amongst students. Of the 93 students not currently and never formerly interested in EM, the most common response (25.8%) for why they did not choose EM was the lack of a strong mentor in the field. Conclusion: The results confirmed previous findings of lifestyle factors drawing students to EM. Future academic opportunities were less likely to draw students to EM than students pursuing other specialties. Lack of mentorship in the field was the most common reason given for why students did not consider EM. Given the lack of direct EM exposure until late in the curriculum of most medical schools, mentorship may be particularly important for EM and future study should focus on this area. Background: Misdiagnosis is a major public health problem. Dizziness leads to 10 million visits annually in the US, including 2.6 million to the emergency department (ED). 
Despite extensive ED workups, diagnostic accuracy remains poor, with at least 35% of strokes missed in those presenting with dizziness. ED physicians need and want support, particularly in the best method for diagnosis. Strong evidence now indicates the bedside oculomotor exam is the best method of differentiating central from peripheral causes of dizziness. Objectives: To determine whether, after a vertigo day that includes instruction in head impulse testing, emergency medicine residents feel comfortable discharging a patient with signs of vestibular neuritis and a positive head impulse test without ordering a CT scan. Methods: Postgraduate year 1-4 emergency medicine residents participated in a four-hour vertigo day. We developed a mixed cognitive and systems intervention with three components: an online game that began and ended the day, a didactic taught by Dr. Newman-Toker, and a series of small group exercises. The small group sessions included the following: a question and answer session with the lecturer; vertigo special tests (cerebellar assessment, Dix-Hallpike, Epley maneuver); a hands-on head impulse tutorial using a mannequin; and a video lecture on other tests useful in vertigo evaluation (nystagmus, test of skew, vestibulo-ocular reflex, ataxia). Results: Thirty emergency medicine residents were studied. Before and after the intervention the residents were given a survey in which one question asked ''In a patient with acute vestibular syndrome and a history and exam compatible with vestibular neuritis, I would be willing to discharge the patient without neuroimaging based on an abnormal head impulse test result that I elicited''. Resident answers were based on a seven-point Likert scale from strongly agree to strongly disagree. Twenty-five residents completed both surveys. Of the seven residents who changed their responses from pre to post, all (100%) changed their answer from disagree/neutral to agree after the four-hour vertigo day (McNemar's test, p = 0.0082). Conclusion: In this single-center study, teaching head impulse testing as part of a vertigo day increases resident comfort with discharging a patient with vestibular neuritis without a CT scan. Background: Previous studies have been inconsistent in determining the effect of increased ED census on resident workload and productivity. We examined resident workload and productivity after the closure of a large urban ED near our facility, which resulted in a rapid 21% increase in our census. Objectives: We hypothesized that the closure of a nearby hospital, with a resulting influx of ED patients to our facility, would not change resident productivity. Methods: This computer-assisted retrospective study compared new patient workups per hour and patient load before and after the closure of a large nearby hospital. Specifically, new patient workups per hour and the 4 pm patient census per resident were examined for a one-year period in the calendar year prior to the closing and also for one year after the closing. We did not include the four-month period surrounding the closure in order to determine the long-term overall effect. Background: Emergency medicine residents use simulation for training due to multiple factors, including the acuity of certain situations they face and the rarity of others. Current training on high-fidelity mannequin simulators is often critiqued by residents over the physical exam findings present, specifically the auscultatory findings.
This detracts from the realism of the training and may also lead a resident down a different diagnostic or therapeutic pathway. Wireless remote programmed stethoscopes represent a new tool for simulation education that allows any sound to be wirelessly transmitted to a stethoscope receiver. Objectives: Our goal was to determine if a wireless remote programmed stethoscope was a useful adjunct in simulation-based cases using a high-fidelity mannequin. Our hypothesis was that this would represent a useful adjunct in simulation education of emergency medicine residents. Methods: Starting June 2011, PGY1-3 emergency medicine residents were assessed in two simulation-based cases using pre-determined scoring anchors. An experimental randomized crossover design was used in which each resident performed a simulation case with and without a remote programmed stethoscope on a high-fidelity mannequin. Scoring anchors and surveys were used to collect data, and differences of means were calculated. Results: Fourteen residents participated in the study. Residents rated the physical exam findings as most realistic in the case with the adjunct in 13/14 (93%) and preferred the use of the adjunct in 13/14 (93%). On a five-point Likert scale, with 5 being the most realistic, the adjunct-associated case averaged 4.4 as compared to 3.0 without (difference of means 1.4, p = 0.00017). Average resident scores were 2.5/3 with the adjunct and 2.3/3 without (difference of means 0.2, p = 0.076). Average total times were 28:49 with the adjunct as compared to 30:02 without. Conclusion: A wireless remote programmed stethoscope is a useful adjunct in simulation training of emergency medicine residents. Residents noted physical exam findings to be more realistic, preferred its use, and showed a non-significant trend toward improved scores when using the adjunct. Background: Prior studies predict an ongoing shortage of emergency physicians to staff the nation's EDs, especially in rural areas. To address this, EM organizations have discussed broadening access to ACGME- or AOA-accredited EM residency programs to physicians who previously trained in another specialty and focusing on physicians already practicing in rural areas. Objectives: To investigate whether EM program directors (PDs) from allopathic and osteopathic residency programs would be willing to accept applicants previously trained in other specialties and whether this willingness is modified by applicants' current practice in rural areas. Methods: A five-question web-based survey was sent to 200 U.S. EM PDs asking about their policies on accepting residents with past training and from rural practices. Questions included whether a PD would accept a resident with prior training in other specialties, how many years after this training an applicant would still be a competitive candidate, and whether a physician practicing in a rural region would be more likely to be accepted into the program. Different characteristics of the residency programs were recorded, including length of program, years in existence, size, type, and location of program. We compared responses by program characteristics using the chi-square test. Results: Of the 96 (48%) PDs responding to date, a large majority (87%) reported they do accept applicants with previous residency training, although directors of osteopathic programs were less likely to accept these applicants (56% vs 94% for allopathic; p < 0.001).
Overall, 28% of PDs reported no limit on the length of time from prior training to when they are accepted at an EM program. 73% reported it is very or possibly realistic they would accept a candidate who had completed training and was board certified in another specialty. A majority of all respondents (61%) felt a physician practicing in a rural setting might be viewed as a more favorable candidate, even if the resident would only be in the program for 2 years after receiving training credit. Directors of newer programs (<5 years of existence) were more likely to view these candidates favorably than older programs (91% vs 53%; p = 0.02). Conclusion: There appear to be many EM residency programs that would at least review the application and consider accepting a candidate who trained in another specialty. A Qualitative Assessment of Emergency Medicine Self-Reported Strengths Todd Guth University of Colorado, Aurora, CO Background: Self-reflection has been touted as a useful way to assess the ACGME core competencies. Objectives: The purpose of this study is to gain insight into resident physician professional development through analysis of self-perceived strengths. A secondary purpose is to discover potential topics for selfreflective narrative essays relating to the ACGME core competencies. Methods: Design: A small qualitative study was performed to explore the self-reported strengths of emergency medicine (EM) residents in a single four-year residency. Participants: All 54 residents regardless of year of training were also asked to report their selfperceived strengths. Observations: Residents were asked: ''What do you feel are your greatest strengths as a resident? Provide a quick description.'' The author and another reviewer identified themes from within each year of residency with Abraham Maslow's conscious competence conceptual framework in mind. Occurrences of each theme were counted by the reviewers and organized according to frequency. Once the top ten themes for each year of residency were identified and exemplar quotes identified, the two reviewers identified trends. Inter-rater agreements were calculated. Results: Representing unconscious incompetency, the first trend was the reported presence of ''enthusiasm and a positive attitude'' from residents early in their training that decreases further along in training. Additionally, a ''willingness and motivation to improve and learn'' was reported as a strength throughout all the years of training but most frequently reported in the first two years of residency. Entering into conscious incompetence, the second trend identified was ''recognition of limitations and openness to constructive feedback'' that was mentioned frequently in the second and third years of residency. Demonstrating conscious competence, the third trend identified was the increase in identification of the strengths of ''educational leadership, teamwork skills and communication, and departmental patient flow and efficiency'' in the later years of residency. Conclusion: Self-reported strengths has helped to identify both themes within each year of residency and trends among the years of residency that can serve as areas to explore in self-reflective narratives relating to the ACGME core competencies. training. POFU can also be used to assess the ACGME Core Competency of Practice-Based Learning. The exact form or frequency of POFU assessment among various EM residencies, however, is not currently known. 
Objectives: We aimed to survey EM residencies across the country to determine how they fulfill the POFU requirement and whether certain program structure variables were associated with different POFU systems. We hypothesized that implementation of POFU systems among EM residencies would be highly variable. Methods: In this IRB-approved study, all program directors of ACGME allopathic EM residencies were invited to complete a 10-question survey on their current approaches to POFU. Respondents were asked to describe their current POFU system's characteristics and rate its ease of use, effectiveness, and efficiency. Data were collected using SurveyMonkey(TM) and reported using descriptive statistics. Results: Of 158 residencies surveyed, 81 (51%) submitted complete data. 77.5% were completed by program directors and over three-fourths (76.1%) of EM residencies require monthly completion of POFUs. The mean total POFUs required per year was 78 (95% CI 58-98), with a median of 64 and a range of 2-400. Almost 2/3 (63%) of residencies use an electronic POFU system. Most (84%) 4-year EM residencies use an electronic POFU system, compared with half (54%) of 3-year residencies (difference 30%, p = 0.025, 95% CI 5.1%-47.2%). Seven commercially available electronic programs are used by 71% of the residencies, while 29% use a customized product. Most respondents (88%) rated their POFU system as easy to use, but less than half (49%) felt it was an effective learning tool or an efficient one (45%). Onethird (34%) would use a different POFU system if available, and almost half (44%) would be interested in using a multi-residency POFU system. Conclusion: EM residency programs use many different strategies to fulfill the RRC requirement for POFU. The number of required POFUs and the method of documentation vary considerably. About two-thirds of respondents use an electronic POFU system. Less than half feel that POFU logs are an effective or efficient learning tool. Background: Certification of procedural competency is requisite to graduate medical education. However, little is known regarding which platforms are best suited for competency assessment. Simulators offer several advantages as an assessment modality, but evidence is lacking regarding their use in this domain. Furthermore, perception of an assessment environment has important influence on the quality of learning outcomes, and procedural skill assessment is ideally conducted on a platform accepted by the learner. Objectives: To ascertain if a simulator performs as well as an unembalmed cadaver with regard to residents' perception of their ability to demonstrate procedural competency during ultrasound (US) guided internal jugular vein (IJ) catheterization. Methods: In this cross-sectional study at an urban community hospital during July of 2011, 15 residents in their second or third year of training from a 3-year EM residency program performed US guided catheterizations of the IJ on both an unembalmed cadaver and a simulator manufactured by Blue Phantom. After the procedure, residents completed an anonymous survey ascertaining how adequately each platform permitted their demonstration of proficiency on predefined procedural steps. Answers were provided on a Likert scale of 1 to 10, with 1 being poor and 10 being excellent. P values < 0.10 were considered educationally significant. 
Results: The median overall rating of the simulator (S) to serve as an assessment platform was similar to that of the cadaver (C), with scores of 8.0 and 8.3 respectively, p = 0.89. Median ratings for permitting the demonstration of specific procedural steps were also compared by platform. Conclusion: Senior EM residents positively rate the Blue Phantom simulator as an assessment platform and similarly to that of a cadaver with regard to permitting their demonstration of procedural competency for US guided IJ catheterization, but did prefer the cadaver to a greater degree when identifying and guiding the needle into the IJ. Methods: In Fall 2011, WCMC and WCMC-Q students taking the course completed a 20-question pre- and post-test. WCMC-Q students also completed a post-course single-station Objective Structured Clinical Examination (OSCE) that evaluated their ability to identify and perform eight actions critical for a first responder in an emergency situation (Table 1). Results: On both campuses, mean post-test scores were significantly higher than mean pre-test scores (p ≤ 0.001). On the pre-test, mean WCMC student scores were significantly higher than for WCMC-Q students (p = 0.02); however, no difference was found in mean post-test scores (p = 0.895). There was no association between the scores on the OSCE (mean = 7.01, sd = 1.00) and the post-test (p = 0.683), even after adjusting for a possible evaluators' effect (Table 2). Conclusion: The Clinical Skills Course was effective in enhancing student knowledge in both Qatar and New York, as evidenced by the significant improvement in scores from the pre- to post-tests. The course was able to bring WCMC-Q student scores, and presumably knowledge, up to the same level as WCMC students. Students performed well on the OSCE, suggesting that the course was able to teach them the critical actions required of a first responder. The lack of association between the post-test and OSCE scores suggests that student knowledge does not independently predict ability to learn and demonstrate critical actions required of a first responder. Future studies will evaluate whether the course affects the students' clinical practice. Table 1 (critical first-responder actions, in part): assess breathing; assess circulation; call EMS; call EMS and assess ABCs prior to other interventions; immobilize; localize and control bleeding; splint fractured extremity. Background: MedWAR teaches medical knowledge and skills specific to wilderness medicine by incorporating simulated medical scenarios into a day-long adventure race. This event has gained acceptance nationally in wilderness medical circles as an excellent way to appreciate the challenges of wilderness medicine; however, its effectiveness as a teaching tool has not yet been verified. Objectives: The objective of this study was to determine if improvement in simulated clinical and didactic performance can be demonstrated by teams participating in a typical MedWAR event. Methods: We developed a complex clinical scenario and written exam to test the basic tenets that are reinforced through the MedWAR curriculum. Teams were administered the test and scored on a standardized scenario immediately before and after the 2011 Midwest MedWAR race. Teams were not given feedback on their pre-race performance. Scenario performance was based on the number of critical actions correctly performed in the appropriate time frame. Data from the scenario and written exams were analyzed using a standard paired difference t-test. Results: A total of 31 teams participated in both the pre- and post-event scenarios.
The teams' pre-race scenario performance was 71.0% (sd = 17.0, n = 31) of critical actions met, compared to a post-race performance of 89.7% (sd = 11.4, n = 31). The mean improvement was 18.7% (sd = 18.7, n = 31, 95% CI 12.1, 25.3), with a significant paired two-tailed t-test (p ≤ 0.01). A total of 95 individual subjects took the written pre- and post-tests. The written scores averaged 84.5% pre-race (sd = 12.5, n = 95) and 88.7% post-race (sd = 11.5, n = 95). The mean improvement was 4.2% (sd = 11.7, n = 95, CI −7.5, 15.9), with a significant paired two-tailed t-test (p ≤ 0.01). Conclusion: MedWAR participants demonstrated a significant improvement in both written exam scores and the management of a simulated complex wilderness medical scenario. This strongly suggests that MedWAR is an effective teaching platform for both wilderness medicine knowledge and skills. Palliative Methods: ED residents and faculty of an urban, tertiary care, Level I trauma center were asked to complete an anonymous survey (6/2010-10/2011). Participants ranked 22 statements on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). Statements covered four main domains of barriers related to: 1) education/training, 2) communication, 3) ED environment, and 4) personal beliefs. Respondents were also asked if they would call a PC consult for 15 ED clinical scenarios (based on established triggers). Results: 30/45 (67%) eligible participants completed the survey (23 residents, 7 faculty); average age was 31 years, 52% (15/29) were male, and 58% (15/26) were Caucasian. Respondents identified two major barriers to ED-PC provision: lack of 24-hour availability of the PC team (mean score 4.4) and lack of access to complete medical records (4.2). Listed domain barriers included: communication-related issues (mean 3.3) such as access to family or primary providers, the ED environment (2.8), for example a chaotic setting with time constraints, education/training (2.7) related to pain/PC, and personal beliefs regarding end-of-life care (2.5). All respondents agreed that they would call a PC consult for a 'hospice patient in respiratory distress', and a majority (73%) would consult PC for 'massive intracranial hemorrhage, traumatic arrest, and metastatic cancer'. However, traditional inpatient triggers such as frequent re-admissions for organ failure (dementia, congestive heart failure, and obstructive pulmonary disease exacerbations) were infrequently (10%) chosen for PC consult. Conclusion: To enhance PC provision in the ED setting, two main ED physician-perceived barriers will likely need to be addressed: lack of access to medical records and lack of 24/7 availability of the PC team. ED physicians may not use the same criteria to initiate PC consults as compared to the traditionally established inpatient PC consult trigger models. Percent of charts with an MSE by AIT prior to resident evaluation (a measure of reduced diagnostic uncertainty and decision-making), (4) ED volume. Results: There were no educationally significant differences in productivity or acuity between the pre-AIT and post-AIT groups. MSE was recorded in the chart prior to resident evaluation in 10.9% of cases. ED volume rose by 9.0% between periods. Conclusion: AIT did not affect productivity or acuity of patients seen by EM2s. While some volume was directed away from residents by AIT (patients treated-and-released by AIT only), overall volume increased and made up the difference. This is similar to previously reported rankings that program directors gave to the same criteria.
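As a quick arithmetic check on the MedWAR scenario results above, the 95% confidence interval for the mean improvement can be reproduced from the reported summary statistics alone. The sketch below assumes a standard t-based interval (the abstract does not state which CI method was used); the small gap from the reported (12.1, 25.3) is consistent with rounding of the reported mean and SD.

```python
# Reproduce the 95% CI for the MedWAR teams' mean scenario improvement from the
# reported summary statistics (mean 18.7%, SD 18.7, n = 31). A t-based interval
# is assumed here; the abstract does not state which CI method was used.
import math
from scipy import stats

mean_diff, sd_diff, n = 18.7, 18.7, 31
se = sd_diff / math.sqrt(n)                 # standard error of the mean difference
t_crit = stats.t.ppf(0.975, df=n - 1)       # two-sided 95% critical value, 30 df
lo, hi = mean_diff - t_crit * se, mean_diff + t_crit * se
print(f"95% CI: ({lo:.1f}, {hi:.1f})")      # ~ (11.8, 25.6), close to the reported (12.1, 25.3)
```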
Although medical students agreed with program directors on the importance of most aspects of the NRMP application areas of discordance included higher medical student ranking for extracurricular activities and a lower relative ranking for AOA status than program directors. This can have implications for medical student mentoring and advising in the future. Background: Emergency care of older adults requires specialized knowledge of their unique physiology, atypical presentations, and care transitions. Older adults often require distinctive assessment, treatment and disposition. Emergency medicine (EM) residents should develop expertise and efficiency in geriatric care. Older adults represent over 25% of most emergency department (ED) volumes. Yet many EM residencies lack curricula or assessment tools for competent geriatric care. The Geriatric Emergency Medicine Competencies (GEMC) are high-impact geriatric topics developed to help residencies meet this demand. Objectives: To examine the effect of a brief GEMC educational intervention on EM resident knowledge. Methods: A validated 29-question didactic test was administered at six EM residencies before and after a GEMC focused lecture delivered summer and fall of 2009. Scores were analyzed as individual questions and in defined topic domains using a paired Student's t-test. Results: A total of 301 exams were included. The testing of didactic knowledge before and after the GEMC educational intervention had high internal reliability (87.9%). The intervention significantly improved scores in all domains (Table 1) . Graded increase in geriatric knowledge occurred by PGY year with the greatest improvement seen at the PGY 3 Level (Table 2) . Conclusion: Even a brief GEMC intervention had a significant effect on EM resident knowledge of critical geriatric topics. A formal GEMC curriculum should be considered in training EM residents for the demands of an ageing population. The overall procedure experience of this incoming class was limited. Most R1s had never received formal education in time management, conflict of interest management, or safe patient trade-off. The majority lacked confidence in their acute and chronic pain management skills. These entry level residents lacked foundational skill levels in many knowledge areas and procedures important to the practice of EM. Ideally medical school curricular offerings should address these gaps; in the interim, residency curricula should incorporate some or all of these components essential to physician practice and patient safety. Background: The American Heart Association and International Liaison Committee on Resuscitation recommend patients with return of spontaneous circulation following cardiac arrest undergo post-resuscitation therapeutic hypothermia. In post-cardiac arrest patients presenting with a rhythm of VF/VT, therapeutic hypothermia has been shown to reduce neurologic sequelae and decrease overall mortality. Objectives: To explore clinical practice regarding the use of therapeutic hypothermia and compare survival outcomes in post-cardiac arrest patients. A secondary outcome was to assess whether the initial presenting cardiac arrest rhythm (ventricular fibrillation/ventricular tachycardia (VF/VT) versus pulseless electrical activity (PEA) or asystole) was associated with differences in outcomes. 
Methods: A retrospective medical record review was conducted for all adult (≥18 years) post-cardiac arrest patients admitted to the ICU of an academic tertiary care centre (annual ED census 150,000) from 2006-2007. Data were extracted using a standardized data collection tool by trained research personnel. Results: 200 patients were enrolled. Mean (SD) age was 66 (16) and 56.5% were male. Of 58 (29.0%) patients treated with hypothermia, 27 (46.6%) presented with an initial rhythm of VF/VT and 31 (53.4%) presented with PEA or asystole. Nine (33.3%) patients with VF/VT were treated with therapeutic hypothermia and discharged from hospital compared to 2 (6.4%) patients with PEA or asystole (Δ 26.9%; 95% CI: 6.4%, 46.3%). Of 142 patients not treated with hypothermia, 37 (26.1%) presented with VF/VT, 93 (65.5%) presented with PEA or asystole, and 12 (8.4%) initial rhythms were unknown. Fifteen (40.5%) patients with VF/VT, not treated with hypothermia, were discharged from hospital compared to 13 (13.9%) patients with PEA or asystole (Δ 26.6%; 95% CI: 10.0%, 43.5%). Regardless of initial presenting rhythm or initiation of therapeutic hypothermia, 37 (88.1%) discharged patients had good neurological function as assessed by the cerebral performance category (CPC score 1-2). Conclusion: Although recommended, post-cardiac arrest therapeutic hypothermia was not routinely used. Patients with VF/VT who were treated with hypothermia had better outcomes than those with PEA or asystole. Further research is needed to assess whether cooling patients with presenting rhythms of PEA or asystole is warranted. Racial Background: Chronic obstructive pulmonary disease (COPD) is a major public health problem in many countries. The course of the disease is characterised by episodes, known as acute exacerbations (AE), when symptoms of cough, sputum production, and breathlessness become much worse. The standard prehospital management of patients suffering from an AECOPD includes oxygen therapy, nebulised bronchodilators, and corticosteroids. High flow oxygen is used routinely in prehospital areas for breathless patients with COPD. There is little high-quality evidence on the benefits or potential dangers in this setting, but audits have shown increased mortality, acidosis, and hypercarbia in patients with AECOPD treated with high flow oxygen. Objectives: To compare standard high flow oxygen treatment with titrated oxygen treatment for patients with an AECOPD in the prehospital setting. Methods: Cluster randomized controlled parallel group trial comparing high flow oxygen treatment with titrated oxygen treatment in the prehospital setting. Results: In an intention-to-treat analysis (n = 405), the risk of death was significantly lower in the titrated oxygen arm compared with the high flow oxygen arm for all patients and for the subgroup of patients with confirmed COPD (n = 214). Overall mortality was 9% (21 deaths) in the high flow oxygen arm compared with 4% (7 deaths) in the titrated oxygen arm; mortality in the subgroup with confirmed COPD was 9% (11 deaths) in the high flow arm compared with 2% (2 deaths) in the titrated oxygen arm. Titrated oxygen treatment reduced mortality compared with high flow oxygen by 58% for all patients (p = 0.02) and by 78% for the patients with confirmed chronic obstructive pulmonary disease (p = 0.04). Patients with COPD who received titrated oxygen according to the protocol were significantly less likely to have respiratory acidosis or hypercapnia than were patients who received high flow oxygen.
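As a quick check on the mortality figures in the titrated-oxygen abstract above, the reported relative risk reductions can be approximated directly from the rounded mortality percentages; arm sizes are not given in the abstract, so the small gap from the reported 58% presumably reflects rounding.

```python
# Relative risk reduction (RRR) implied by the mortality rates reported above.
# Rounded percentages are used because arm sizes are not reported in the abstract.
def rrr(p_control, p_treatment):
    """Relative risk reduction = 1 - relative risk."""
    return 1.0 - p_treatment / p_control

print(f"All patients:   RRR = {rrr(0.09, 0.04):.0%}")   # ~56% (reported 58%)
print(f"Confirmed COPD: RRR = {rrr(0.09, 0.02):.0%}")   # ~78% (reported 78%)
```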
Conclusion: Titrated oxygen treatment significantly reduced mortality, hypercapnia, and respiratory acidosis compared with high flow oxygen in AECOPD. These results provide strong evidence to recommend the routine use of titrated oxygen treatment in patients with breathlessness and a history or clinical likelihood of COPD in the prehospital setting. (Originally submitted as a ''late-breaker.'') Trial registration: Australian New Zealand Clinical Trials Register ACTRN12609000236291. Background: Toxic particulates and gases found in ambulance exhaust are associated with acute and chronic health risks. The presence of such materials in areas proximate to ED ambulance parking bays, where emergency services' vehicles are often left running, is potentially of significant concern to ED patients and staff. Objectives: Investigators aimed to determine whether the presence of ambulances correlated with ambient particulate matter concentrations and toxic gas levels at the study site ED. Methods: The Ambulance Exhaust Toxicity in Healthcare-related Exposure and Risk [AETHER] program conducted a prospective observational study at an academic urban ED/Level I trauma center. Environmental ambient gas was sampled over a continuous five-week period from September to October 2011. Two sampling locations in the public triage area (public patient drop-off area without ambulances) and three sampling locations in the ambulance triage area were randomized for 24-hour monitoring windows with a temporal resolution of 2 minutes to obtain 7 days of non-contiguous data for each location. Concentrations of particulate matter less than 2.5 microns in aerodynamic size (PM2.5), oxygen, hydrogen sulfide (H2S), and carbon monoxide (CO), as well as the lower explosive limit for methane (LEL), were monitored with professionally calibrated devices. Ambulance traffic was recorded through offline review of 24/7 security video footage of the site's ambulance bays. Results: 4,118 measurements at the public triage nurse desk space revealed PM2.5 concentrations with a mean of 21.32 ± 27.01 µg/m³ (median 15.95 µg/m³; maximum 1,152.58 µg/m³). 4,867 ambulance triage nurse desk space PM2.5 concentrations recorded a mean of 60.45 ± 53.38 µg/m³ (p < 0.0001, unpaired t-test; median 43.37 µg/m³; maximum 580.78 µg/m³). Oxygen levels remained steady throughout the study period; CO, H2S, and LEL were not detected. Ambulance activity levels had the highest correlations with PM2.5 concentrations at the ambulance triage foyer (r = 0.47) and desk area (r = 0.42), where patients wait and ED staff work 8-12 hour shifts. Conclusion: ED spaces proximate to ambulance parking bays had higher levels of PM2.5 than areas without ambulance traffic. Concentrations of ambient particulate matter in acute care environments may pose a significant health threat to patients and staff. An EMS ''Pit Crew'' Model Improves EKG And STEMI Recognition Times In Simulated Prehospital Chest Pain Patients Sara Y. Baker1, Salvatore Silvestri1, Christopher D. Vu1, George A. Ralls1, Christopher L. Hunter1, Zack Weagraff2, Linda Papa1; 1Orlando Regional Medical Center, Orlando, FL; 2Florida State University College of Medicine, Orlando, FL. Background: Prehospital teams must minimize time to EKG acquisition and STEMI recognition to reduce overall time from first medical contact to reperfusion. Auto-racing ''pit crews'' model rapid task completion by pre-assigning roles to team members.
Objectives: We compared time-to-completion of key tasks during chest pain evaluation in EMS teams with and without pre-assigned roles. We hypothesized that EMS teams using the ''pit crew'' model would improve time to recognition and treatment of STEMI patients. Methods: A randomized, controlled trial of paramedic students was conducted over 2 months at Orlando Medical Institute, a state-approved paramedic training center. We compared a standard EMS chest pain management algorithm (control) with a pre-assigned tasks (''pit crew'') algorithm (intervention) in the evaluation of simulated chest pain patients. Students were randomized into groups of three; intervention and control groups did not interact after randomization. All students reviewed basic prehospital chest pain management and either the standard or pre-assigned tasks algorithm. Groups encountered three simulated patients. Laerdal SimMan® software was used to track completion of tasks: taking vital signs, IV access, EKG acquisition and interpretation, ASA administration, hospital STEMI notification, and total time on scene. Results: We conducted 54 simulated-patient encounters (30 control / 24 intervention encounters). Mean time-to-completion of each task was compared in the control and intervention groups respectively. Time to obtain vital signs was 4:18 vs. 2:21 min (P = 0.001); time to ASA administration was 3:54 vs 2:00 min (P < 0.001); time to EKG acquisition was 5:39 vs 3:42 min (P < 0.001); time to EKG interpretation was 6:43 vs 4:21 min (P < 0.001); time to IV access was 5:42 vs 4:45 min (P = 0.05); time to STEMI notification was 7:19 vs 4:26 min (P < 0.001); and time to scene completion was 9:02 vs 5:27 min (P < 0.001). Conclusion: Paramedic student teams with pre-assigned roles (the ''pit crew'' model) were faster to obtain vital signs, administer ASA, acquire and interpret the EKG, and provide STEMI notification, and had shorter overall time on scene during simulated patient encounters. Further study with experienced EMS teams in actual patient encounters is necessary to confirm the relevance of these findings.
Patients whose arrest was witnessed (odds ratio [OR] 1.26; 95% confidence interval [CI] 1.06-1.50) were more likely to have an AED used (table). When compared to high-income white neighborhoods, arrest victims in low-income black neighborhoods were least likely to have an AED used (OR 0.54; 95% CI 0.33-0.87). Arrest victims in low-income white (OR 0.57; 95% CI 0.32-1.02) and low-income integrated (OR 0.70; 95% CI 0.51-0.96) neighborhoods were also less likely to have an AED used. Conclusion: Arrest victims in black and low-income neighborhoods are least likely to have an AED used by a layperson or first responder. Future research is needed to better understand the reasons for low rates of AED use for cardiac arrests in these neighborhoods. The Impact of an Educational Intervention on the Pre-Shock Pause Interval among Patients Experiencing an Out-Of-Hospital Cardiac Arrest Jonathan Studnek1, Eric Hawkins1, Steven Vandeventer2; 1Carolinas Medical Center, Charlotte, NC; 2Mecklenburg EMS Agency, Charlotte, NC. Background: Pre-shock pause duration has been associated with survival to hospital discharge (STD) among patients experiencing out-of-hospital cardiac arrest (OOHCA) resuscitation. Recent research has demonstrated that for every 5-second increase in this interval there is an 18% decrease in STD. Objectives: To determine if a decrease in the pre-shock pause interval for patients experiencing OOHCA could be realized after implementation of an educational intervention. Methods: This was a retrospective analysis of data obtained from a single ALS urban EMS system from 1/1/2010 to 12/31/10 and 8/1/11 to 11/6/2011. In August 2011, an educational intervention was designed and delivered to approximately 150 paramedics emphasizing the importance of reducing the time off chest during CPR. Specifically, the time period just prior to defibrillation was emphasized by having rescuers count every 20th compression and pre-charge the defibrillator on the 180th compression. In order to determine if this change resulted in process improvement, 12 months of data were assessed before and 3 months after the educational intervention. Pre-shock pause was the outcome variable and was defined as the time period after compressions ceased until a shock was delivered. This interval was measured by a CPR feedback device connected to the defibrillator. Inclusion criteria were adult patients who required at least one defibrillation and had the CPR feedback device connected during the defibrillation attempt. Analysis was descriptive, utilizing means and 95% CIs, as well as a Wilcoxon rank-sum test to assess the difference between the two time periods. Results: In the pre-intervention period there were 117 patients who received 211 defibrillations compared to 30 patients receiving 71 defibrillations in the post-intervention phase. The mean duration of the pre-shock pause pre-intervention was 35 seconds (95% CI 20-50), while the post-intervention duration was 9 seconds (95% CI 7-12). The difference in pre-shock pause duration was statistically significant with p < 0.001. Conclusion: These data indicate that after a simple educational intervention emphasizing decreasing time off chest prior to defibrillation, the pre-shock pause duration decreased. Future research must describe the sustainability of this intervention as well as the effects this process measure may have on outcomes such as survival to hospital discharge. Background: The Broselow tape (BT) has been used as a tool for estimating medication dosing in the emergency setting.
The obesity trend has demonstrated a tendency towards insufficient pediatric weight estimations from the BT, and thus potential under-dosing of resuscitation medications. Objectives: This study compared drug dosing based on the BT with dosing from a novel electronic tool (ET) that accounts for provider estimation of body habitus. Methods: Data were obtained from a prospective convenience sample of children ages 1 to 8 years arriving at a pediatric emergency department. A clinician performed an assessment of body habitus (average/underweight, overweight, or obese), blinded to the patient's actual weight and parental weight estimate. Parental estimate of weight and measured length and weight were collected. Epinephrine dosing was calculated from the measured weight and the BT measurement, as well as from a smart-phone tool based on the measured length and clinician's estimate of body habitus, and a modified tool (MT) incorporating the parent estimate of habitus. The Wilcoxon rank-sum test was used to compare median percent differences in dosing. Results: One hundred children (mean age 3 years) were analyzed; 47% were overweight or obese. Clinicians correctly identified children as overweight/obese 23% of the time (CI 0.12-0.38). Adding parent estimate of weight improved this to a sensitivity of 74% (CI 0.59-0.86). The median difference between the weight-based epinephrine dose and BT dose was 11%. For the ET the median difference from the weight-based dose was 7% (p = 0.05 compared to the BT), and for the MT it was 1.7% (p < 0.01 compared to the BT). When a clinically significant difference was defined as ±10% of the actual dose, BT was within that range 40% of the time, ET was within range 56% of the time (p = 0.02), and MT was within range 64% of the time. Background: In most out-of-hospital cardiac arrest (OHCA) events, a call to 9-1-1 is the first action by bystanders. Accurate diagnosis of cardiac arrest by the call taker depends on the caller's verbal description. If cardiac arrest is not suspected, then no telephone CPR instructions will be given. Objectives: We measured the effect of a change in the EMS call taker question sequence on the accuracy of diagnosis of cardiac arrest by 9-1-1 call takers. Methods: We retrospectively reviewed the Cardiac Arrest Registry to Enhance Survival (CARES) dataset for January 1, 2009 through June 30, 2011 from a city, population 750,000, with a longstanding telephone CPR program (APCO). We included OHCA cases of any age who were in arrest prior to the arrival of EMS and for whom resuscitation was attempted. In early 2010, 9-1-1 call takers were taught to follow a revised telephone script that emphasized focused questions, assertive control of the caller, and provision of hands-only CPR instructions. The medical director personally explained the reasons for the changes, emphasizing the importance of assertive control of the caller and the comparative safety of chest compressions in patients not in cardiac arrest. Beginning in 2010, call recordings were reviewed regularly with feedback to the call taker by the 9-1-1 center leadership. The main outcome measure was sensitivity of the 9-1-1 call taker in diagnosing cardiac arrest. Bystander CPR was reported by EMS crews attending the event. We compared 2009 with 2010 and 2011 using the χ² test and odds ratios (OR). Results: There were 504 OHCA cases in 2009, 457 cases in 2010, and 287 in the first half of 2011 (68/100,000 population). The mean age was 57 ± 21 years, and 27% of the events were witnessed.
Before the revision, 40% of OHCA cases were identified by 9-1-1 dispatchers; after the revised questioning sequence, 74% were identified (OR 4.3, 95% CI 3.2-5.6). The false positive rate changed little (from 56/month to 72/month). The mean time to question callers was unchanged (53 vs 51 seconds). Bystander CPR was performed in 37.3% of events in 2009, 39.2% in 2010, and 49.1% of events in 2011 (p < 0.001). Conclusion: Emphasis on scripted assessment improved sensitivity without loss of specificity in identifying OHCA. With repeated feedback, it translated to an increase in victims receiving bystander CPR. In An Out-Of-Hospital Cardiac Arrest Population Confirmed By Autopsy Salvatore Silvestri, Christopher Hunter, George Ralls, Linda Papa; Orlando Regional Medical Center, Orlando, FL. Background: Quantitative end-tidal carbon dioxide (ETCO2) measurements (capnography) have consistently been shown to be more sensitive than qualitative (colorimetric) ones, and the reliability of capnography for assessing airway placement in low perfusion states has sometimes been questioned in the literature. Objectives: This study examined the rate of capnographic waveform presence in an intubated out-of-hospital cardiac arrest cohort and its correlation to endotracheal tube location confirmed by autopsy. Our hypothesis was that capnography is 100% accurate in determining endotracheal tube location, even in low perfusion states. Methods: This cross-sectional study reviewed a detailed prehospital cardiac arrest database that regularly records information using the Utstein style. In addition, the EMS department quality manager routinely logs the presence of an alveolar (four-phase) capnographic waveform in this database. The study population included all cardiac arrest patients from January 1, 2009 through December 31, 2009 managed by a single EMS agency in Orange County, Florida. Patients were included if they had endotracheal intubation performed, had capnographic measurement obtained, failed to regain return of spontaneous circulation (ROSC), and had an autopsy performed. The main outcome was the correlation of the presence of an alveolar waveform and the location of the ETT at autopsy. Results: During the study period, 921 cardiac arrests were recorded. Of these, 263 had an advanced airway placed (ETT or laryngeal tube airway) and no ROSC. Of the 263 advanced airway cases, 73 were managed with an ETT. Autopsies were performed on 30 of these patients, and these constituted our study cohort. The location of the ETT at autopsy was recorded in all 30 of these cases. Capnographic waveforms were recorded in the field in all 30 of these study patients, and 100% of the tubes were located within the trachea at autopsy. The sensitivity of capnography in determining proper endotracheal tube location was 100% in this study. Conclusion: In our study, the presence of a capnographic waveform was 100% reliable in confirming proper placement of endotracheal tubes placed in out-of-hospital patients with poor perfusion states. Results: Over 60 variables were presented to the 34 EMS medical directors responding (100% survey population captured). Among the myriad of responses, 14 (42%) initiate cardiopulmonary resuscitation (CPR) at 30 compressions to 2 ventilations, consistent with ILCOR/AHA guidelines. Seven (21%) initiate continuous chest compressions from the start of CPR with no pause and interposed ventilations.
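Referring back to the dispatcher-recognition figures reported above (40% of OHCA identified before the script revision vs 74% after), the stated odds ratio can be checked directly from the two proportions. This is a sketch of that arithmetic only, not the authors' analysis, which used case-level data.

```python
# Check the dispatcher-recognition odds ratio from the two reported proportions
# (40% identified pre-revision vs 74% post-revision); case counts are not needed.
def odds(p):
    return p / (1.0 - p)

or_recognition = odds(0.74) / odds(0.40)
print(f"OR = {or_recognition:.1f}")   # ~4.3, in line with the reported OR 4.3 (95% CI 3.2-5.6)
```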
Nine (26%) begin chest compressions only during the first 2-3 minutes, with either passive oxygenation by oxygen mask (six; 18%) or no oxygen (three; 9%). Airway management following non-invasive oxygenation and ventilation by primary endotracheal intubation occurs in 12 systems (35%), while six (18%) use supraglottic devices. Fourteen (42%) allow paramedics to decide between endotracheal and supraglottic device placement. Thirty systems (88%) utilize continuous waveform capnography. The initial approach to non-EMS witnessed ventricular fibrillation is chest compression prior to first defibrillation in 30 systems (88%). Eighteen systems (52%) escalate defibrillation energy settings, with four systems (12%) utilizing dual sequential defibrillation. Twenty (59%) initiate therapeutic hypothermia in the field. Conclusion: Wide variability in CA care standards exists in America's largest urban EMS systems in mid-2011, with many current practices promoting more continuity in chest compressions than specified in the 2010 ILCOR/AHA guidelines. Endotracheal intubation, a past mainstay of CA airway management, is deemphasized in many systems. Immediate defibrillation of non-EMS witnessed ventricular fibrillation is uncommon. Objectives: Determine the out-of-hospital cardiac arrest survival in this area of Puerto Rico using the Utstein method. Methods: Prospective observational cohort study of adult patients presenting with an out-of-hospital cardiac arrest to the UPR Hospital ED. Study endpoints will be survival and neurologically intact survival at hospital discharge, 6 months, and 12 months. Results: A total of 144 consecutive cardiac arrest events were analyzed for a period of 2 years. One-hundred fifteen events met criteria for primary cardiac etiology (79.86%). The average age for this group was 68.47 years. There were 45 female (39.13%) and 70 male (60.86%) participants. The average time to start CPR was 14.60 minutes. Transportation to the ED was 71.3% by EMS and 25.22% by private vehicle. A total of 68 events were witnessed (59.13%). The survival rate to hospital admission was 23.66%. The overall cardiac arrest survival was 9.30% and overall neurologically intact survival was 4.30%. Neurologically intact survival at 6 and 12 months was 2.15%. The rate of bystander CPR in our population was 16.13% with a survival rate of 6.66%. Conclusion: Survival from out-of-hospital cardiac arrest in the area served by the UPR Hospital is low but comparable to other cities in the US as reported by the CDC Cardiac Arrest Registry to Enhance Survival (CARES). This low survival rate might be due to low bystander CPR rate and prolonged time to start CPR. Background: Hyperventilation has been directly correlated with increased mortality for out-of-hospital CPR. EMS providers may hyperventilate patients at levels above national BLS guidelines. Real-time feedback devices, such as ventilation timers, have been shown to improve CPR ventilation rates towards BLS standards. It remains unclear if the combination of a ventilation timer and pre-simulation instruction would influence overall ventilation rates and potentially reduce undesired hyperventilation. Objectives: This study measured ventilation rates of standard CPR (and pre-instruction on effects of hyperventilation) compared to CPR with the use of a commercial ventilation timer (and pre-instruction on effects of hyperventilation). 
We hypothesized that use of a ventilation timer, measuring and displaying to EMS providers the real-time ventilations delivered, would result in no difference in ventilation rates when comparing these groups. Methods: This prospective study placed EMS providers into four groups: two controls measuring ventilation rates before (1a) and after instruction (1b) on the deleterious effects of hyperventilation, and a concurrent intervention pair with before (2a) and after instruction (2b), with the second pair measuring ventilation rates with a ventilation timer that provides immediate feedback on respirations given. Ventilation rates were measured for a 60-second period after one minute of simulated CPR using mannequins. Results: The control set without instruction (1a, n = 12) averaged 14.21 breaths (95% CI = 10.31-18.11) and with instruction (1b, n = 13) averaged 20.23 breaths (95% CI = 16.16-24.30). The intervention set without instruction (2a, n = 11) averaged 13.04 breaths (95% CI = 9.29-16.78) and with instruction (2b, n = 13) averaged 11.77 breaths (95% CI = 8.02-15.51). There was a significant improvement (p = 0.016) in ventilation rates with use of a ventilation timer (control group versus intervention group, regardless of pre-instruction). There was no statistically significant difference between groups with respect to instruction alone (p = 0.223). Conclusion: The use of a ventilation timer significantly reduced overall ventilation rates, providing care closer to BLS guidelines. The addition of pre-simulation instruction added no significant benefit to reducing hyperventilation. Background: In 2010, the American Heart Association (AHA) recommended a rate of compressions (ROC) of 100/min and a depth of compressions (DOC) of at least 2 inches for effective CPR. As an educational tool for lay rescuers, the AHA has adopted the catchphrase ''Push Hard, Push Fast''. Objectives: In this IRB-exempt study, we sought to determine if persons without formal CPR training could perform non-ventilated CPR as well as those who have been trained in the past or those currently certified. Methods: A convenience sample of patrons of the New York State Fair was asked to perform 2 minutes of hands-only CPR on a Prestan PP-AM-100M adult CPR manikin. These devices provide visual indicators of acceptable rate and depth of compressions. Each subject was video recorded on a Dell Latitude 620 laptop computer with a Logitech Quick Cam using Logitech Quick Cam 8.4.6 for Windows software. Results: A total of 175 volunteers (74 male, 102 female) aged 16-68 years participated: 52 were never certified (NC) in CPR, 73 were previously certified (PC), and 50 were currently certified (CC). There was no difference in age across the groups. The CC group had a higher proportion of females (chi-square = 9.71, p < 0.008). CC volunteers sustained ROC and DOC for an average of 57.1 seconds, as compared to averages of 18.5 seconds (PC) and 2.3 seconds (NC), respectively (F = 27.8, p < 0.001). The CC group maintained a ROC closer to 100/min (mean 111.6/min) when compared to the PC (mean 85.3/min) and NC (mean 86.0/min) groups (F = 14.7, p < 0.001). A higher proportion of volunteers in the CC group were able to perform adequate DOC (chi-square = 11.2, p < 0.004) and hand placement (chi-square = 19.21, p < 0.001) when compared to the other two groups. Conclusion: Compared to the target ROC and DOC, none of the groups did well, and only 14 subjects met the target ROC/DOC.
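The F statistics reported above (F = 27.8 and F = 14.7) are consistent with one-way ANOVA comparisons across the three certification groups. The sketch below only illustrates how such a comparison is run; the per-subject values are made up for illustration and are not the study's data.

```python
# Illustrative one-way ANOVA across three certification groups (hypothetical data,
# not the study's measurements): seconds of sustained adequate rate and depth.
from scipy import stats

never_certified      = [0, 1, 2, 3, 2, 5, 1, 4]
previously_certified = [10, 25, 15, 20, 18, 22, 12, 26]
currently_certified  = [50, 62, 55, 58, 60, 52, 57, 63]

f_stat, p_value = stats.f_oneway(never_certified, previously_certified, currently_certified)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
```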
Increased out-of-hospital cardiac arrest survivability due to lay rescuer intervention is only assured if CPR is effectively administered. The effect and benefit of maintaining formal CPR training and certification are clear. Background: More than 300,000 out-of-hospital cardiac arrests (OHCAs) occur annually in the United States (US). Automated external defibrillators (AEDs) are life-saving devices in public locations that can significantly improve survival. An estimated 1 million AEDs have been sold in the US; however, little is known about whether locations of AEDs match OHCAs. These data could help determine optimal placement of future AEDs and targeted CPR/AED training to improve survival. Objectives: We hypothesized that the majority (>50%) of AEDs are not located in close proximity (200 feet) to the occurrence of cardiac arrests in a major metropolitan city. Methods: This was a retrospective review of prospectively collected cardiac arrest data from Philadelphia EMS from January 1, 2008 until December 31, 2010. Included were OHCAs of presumed cardiac etiology in individuals 12 years of age or older. Excluded were OHCAs of presumed traumatic etiology, cases where resuscitation was terminated at the scene, and those dead on arrival. AED locations in Philadelphia were obtained from MyHeartMap, a database of installed and wall-mounted AEDs in Pennsylvania. We used GIS mapping software to visualize where OHCAs occurred relative to where AEDs were located and to determine the radius of OHCAs to AEDs. Arrests within a 200, 400, and 600 foot radius of AEDs were identified using the attribute location selection option in ArcGIS. The lengths of the radii were estimated based on the average time it would take for a person to walk to and from an AED (200 feet, 2 minutes; 400 feet, 4 minutes; 600 feet, 6 minutes). Results: We mapped 3,483 OHCAs and 2,314 AEDs in Philadelphia County. OHCAs occurred predominantly in males (55%; 1916/3483) and the mean age was 65.4 years. Ventricular fibrillation occurred in 19% (662/3483). AEDs were primarily located in schools/universities (30%), office buildings (22%), and residential buildings (4%). AEDs were not identified within 200 feet in 93% (3,239) of OHCAs, within 400 feet in 90% (3,135) of OHCAs, and within 600 feet in 79% (2,752) of OHCAs. The figure (large black circles) illustrates AED/OHCA within 200 feet on the left and 600 feet on the right. Conclusion: AEDs were rarely close to the locations of OHCAs, which may be a contributor to low cardiac arrest survival rates. Innovative models to match AED availability with OHCAs should be explored. (Originally submitted as a ''late-breaker.'') Potential Background: Early and frequent epinephrine administration is advocated by ACLS; however, epinephrine research has been conducted primarily with standard CPR (STD). Active compression-decompression CPR with an impedance threshold device (ACD-CPR + ITD) has become the standard of care for out-of-hospital cardiac arrest in our area. The hemodynamic effects of IV epinephrine under this technique are not known. Objectives: To determine the hemodynamic effects of IV epinephrine in a swine model undergoing ACD-CPR+ITD. Methods: Six female swine (32 ± 1 kg) were anesthetized, intubated, and mechanically ventilated. Intracranial, thoracic aorta, and right atrial pressures were recorded via indwelling catheters. Carotid blood flow (CBF) was recorded via Doppler. ETCO2, SpO2, and EKG were monitored. Ventricular fibrillation was induced and went untreated for 6 minutes.
Three minutes each of standard CPR (STD), STD-CPR+ITD, and ACD-CPR+ITD were performed. At minute 9 of the resuscitation, 40 µg/kg of IV epinephrine was administered and ACD-CPR+ITD was continued for 1 minute. Statistical analysis was performed with a paired t-test. Results: Aortic pressure and calculated cerebral and carotid perfusion pressures increased from STD < STD+ITD < ACD-CPR+ITD (p ≤ 0.001). Epinephrine administered during ACD-CPR+ITD significantly increased mean aortic (29 ± 5 vs 42 ± 12, p = 0.01), cerebral (12 ± 5 vs 22 ± 10, p = 0.01), and coronary perfusion pressures (8 ± 7 vs 17 ± 4, p = 0.02); however, mean CBF and ETCO2 decreased (respectively 29 ± 15 vs 14 ± 7.0, p = 0.03; 20 ± 7 vs 18 ± 6, p = 0.04). Conclusion: The administration of epinephrine during ACD-CPR+ITD significantly increased markers of macrocirculation, while significantly decreasing ETCO2, a proxy for organ perfusion. While the calculated cerebral perfusion pressures increased, the directly measured CBF decreased. This calls into question the ability of calculated perfusion pressures to accurately reflect blood flow and oxygen delivery to end organs. Hypoxia Background: During cardiac arrest most patients are placed on 100% oxygen with assisted ventilations. After return of spontaneous circulation (ROSC), 100% oxygen is typically continued for an extended time. Animal data suggest that immediate post-arrest titration of oxygen by pulse oximetry produces better neurocognitive/histologic outcomes. Recent human data suggest that arterial hyperoxia is associated with worse outcomes. Objectives: To assess the relationship between hypoxia, normoxia, and hyperoxia post-arrest and outcomes in post-cardiac arrest patients treated with therapeutic hypothermia. Methods: We conducted a retrospective chart review of 190 post-arrest patients admitted to an academic medical center between January 2000 and December 2007 who had arterial blood gases (ABG) drawn after ROSC. Demographic variables were analyzed using ANOVA and chi-square tests as appropriate. Unadjusted logistic regression analyses were performed to assess the relationship between hypoxia (PaO2 < 60 mmHg), normoxia (60-300 mmHg), hyperoxia (>300 mmHg), and mortality. Results: On the first ABG (190 patients), 37 (19.5%) were hypoxic, 92 (48.4%) normoxic, and 61 (32.1%) hyperoxic. The average age of the cohort was 62.8 years (no difference for hypoxic, normoxic, and hyperoxic patients). Overall mortality was 70.5% (134/190). There were no significant differences in initial heart rate, systolic blood pressure, sex, race, or pre-arrest functional status. In-hospital mortality was significantly higher when the first ABG demonstrated hypoxia (94.6%; 35/37) than for normoxia (68.5%; 63/92) or hyperoxia (59%; 36/61). In unadjusted logistic regression analysis of first PaO2 values, hyperoxia was not associated with increased mortality (OR 0.7; 95% CI 0.3-1.4), but hypoxia was associated with increased mortality (OR 6.1; 95% CI 1.4-27.5). Conclusion: Hypoxia, but not hyperoxia, on first ABG was associated with mortality in a cohort of post-arrest patients. Background: There are over 330,000 deaths due to cardiac arrest per year in the US. The AHA recommends monitoring the quality of CPR primarily through the use of end-tidal CO2 (ETCO2). The level of ETCO2 is significantly dependent on minute ventilation and altered by pressor and bicarbonate use.
Cerebral oximetry (CereOx) uses near-infrared spectroscopy to non-invasively measure oxygen saturation of the frontal lobes of the brain. CereOx has been correlated with cerebral blood flow and jugular vein bulb saturations. Objectives: The objective of this study was to compare the simultaneous measurement of ETCO2 and CereOx to investigate which monitoring method provides the best measure of CPR quality as defined by return of spontaneous circulation (ROSC). Methods: A prospective cohort of a convenience sample of out-of-hospital and ED cardiac arrest patients from two large EDs. Patients were monitored simultaneously by ETCO2 and CereOx during CPR. Patient demographics and arrest data were collected using the Utstein criteria. All patients were monitored throughout the resuscitation efforts. ROSC was defined as a palpable pulse and a measurable blood pressure for a minimum of thirty minutes. Results: Twenty-two patients were enrolled with complete data sets; 27% of the subjects had ROSC. Average down time was 12 minutes (SD ± 14.6) for ROSC subjects and 31 minutes (SD ± 17.8) for subjects without ROSC. The inability to obtain a value of 30 for either ETCO2 or CereOx was 50% and 75% specific, with an 80% and 100% NPV respectively, for predicting lack of ROSC. Obtaining a value of 30 for either ETCO2 or CereOx was 66% and 100% sensitive, respectively, in identifying ROSC. Subjects with ROSC had sustained values above 30 for 1.25 minutes on CereOx and 4.9 minutes on ETCO2 prior to ROSC. The increase in values over a three-minute period prior to ROSC was 13.5 on CereOx and 1.3 on ETCO2. Conclusion: The inability to obtain a value of 30 on either ETCO2 or CereOx strongly predicted lack of ROSC. CereOx provides a larger magnitude and closer temporal increase prior to ROSC than ETCO2. Attaining a value of 30 on CereOx was more predictive of ROSC than ETCO2. Background: Discrepancies can arise when information is communicated to multiple listeners in a short amount of time. This creates a communication barrier not always apparent to practitioners. We examine the perceptions of EMS and ED personnel on the transfer of care and its correlation to missing patient data. Objectives: To evaluate provider perception of information transfer by EMS and ED personnel and compare this to an external observer's objective assessment. Methods: This was a retrospective quality improvement program at an academic Level I trauma center. Transfers of medical and trauma patients from EMS to ED personnel were attended by trained external observers, Research Associates (RA). RA recorded the data communicated: name, age, past medical history (PMH), allergies, medications, events, active problems, vital signs (VS), level of consciousness (LOC), IV access, and treatments given. Then, EMS and ED staff rated their perception of the transfer on a 1-10 rating scale. Results: RA evaluated 448 patient transfers (268 medical and 180 trauma). Transfer time did not differ: 4.05 minutes for medical (95% CI: 3.77-4.32) and 3.92 minutes for trauma patients (95% CI: 3.53-4.31) (p = 0.57). Missing data between the two groups also did not differ, except that LOC and treatment were missed more in medical transfers, while PMH was missed more in trauma transfers. Comparing transfers with all VS present (67%, 300/448) and all VS missing (12%, 55/448), there was no difference in perception of transfer for EMS (9.6/10 VS present vs 9.4/10 VS absent) or ED staff (9.5/10 VS present, 9.4/10 VS absent).
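For the ETCO2/cerebral-oximetry abstract above, the reported test characteristics can be reproduced from a 2x2 table. The cell counts below are reconstructed from the reported percentages (22 patients, 27% with ROSC, i.e. roughly 6 ROSC and 16 non-ROSC subjects) and are assumptions for illustration, not published counts.

```python
# Test characteristics for "attaining a value of 30" as a predictor of ROSC.
# Cell counts are reconstructed from the reported percentages and are assumptions.
def test_characteristics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)          # negative predictive value for lack of ROSC
    return sensitivity, specificity, npv

# CereOx: threshold reached in all 6 ROSC subjects and in 4 of 16 non-ROSC subjects.
cereox = test_characteristics(tp=6, fp=4, fn=0, tn=12)
# ETCO2: threshold reached in 4 of 6 ROSC subjects and in 8 of 16 non-ROSC subjects.
etco2 = test_characteristics(tp=4, fp=8, fn=2, tn=8)

print("CereOx sens/spec/NPV: " + ", ".join(f"{100*x:.0f}%" for x in cereox))  # 100%, 75%, 100%
print("ETCO2  sens/spec/NPV: " + ", ".join(f"{100*x:.0f}%" for x in etco2))   # 67%, 50%, 80%
```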
When all vital signs were missing, RA rated 69.1% of transfers as poor, whereas when all VS were present 80.8% of transfers were considered good. Conclusion: EMS and ED staff felt transfers of care were professional, teams were attentive, and had similar amounts of interruptions for both medical and trauma cases. Their perception of transfer of care was similar even when key information was missing, although external observers rated a significant amount of transfers poorly. Thus, EMS and ED staffs were not able to evaluate their own performance in a transfer of care and external observers were found to be better evaluators of transfers of care. Swati Singh, John Brown, Prasanthi Ramanujam UCSF, San Francisco, CA Background: EMS transports a large number of psychiatric emergencies to emergency departments (ED) across the US. Research on paramedic education related to behavioral emergencies is sparse, but based on expert opinion we know that gaps in paramedic knowledge and training exist. In our system, paramedics triage patients to medical, detoxification, and purely psychiatric destinations, so a paramedic's understanding of these emergencies directly affects the flow of patients in our EDs. Objectives: Our objectives were to understand the gaps in current training and develop a targeted curriculum for field providers with a long term goal of appropriately recognizing and triaging subjects to the ED. Methods: Data were collected using a survey that was distributed during a paramedic association meeting in October 2011. Subjects were excluded if they did not complete the survey. Survey questions addressed demographics of paramedics, frequency of various psychiatric emergencies and their confidence in managing these emergencies. Data were collated, analyzed, and presented as descriptive statistics. Results: Forty-nine surveys were distributed with a response rate of 82% (n = 40/49). Of the respondents, 70% (n = 28) were male and 68% (n = 27) had at least five years experience. Mood, thought, and cognitive disorders were the most frequently encountered presentations and 65% (n = 26) of respondents came across psychiatric emergencies multiple times a week. Many respondents did not feel confident managing agitated delirium (n = 16, 40%), acute psychosis (n = 17, 43%), and intimate partner or elder abuse (n = 14, 35%). A third to a half of the respondents felt they have little or no training in chemical sedation (n = 18, 45%), verbal de-escalation (n = 14, 35%), and triaging patients (n = 21, 53%). Conclusion: We identified a need for a revised curriculum on management of psychiatric emergencies. Future steps will focus on development of a curriculum and change in knowledge after implementation of this curriculum. Background: Prehospital endotracheal intubation has long been a cornerstone of resuscitative efforts for critically ill or injured patients. Paramedic airway management training will need to be modified due to the 2011 ACC/AHA guidelines to ensure maintenance of competency in overall management of airway emergencies. How best to modify the training of paramedics requires an understanding of current experience. Objectives: The purpose of this report is to characterize the airway management expertise of experienced and non-experienced paramedics in a single EMS system. Methods: We retrospectively reviewed all prehospital intubations from an urban/suburban ambulance service (Professional Ambulance, Inc.) over a five-year period (January 01, 2006 to December 31, 2010). 
Characteristics of airway management by paramedics with 0-5 years of experience (Group 1) were compared to those with greater than 5 years of experience (Group 2). Airway management was guided by Massachusetts statewide treatment protocols governing direct laryngoscopy and all adjunctive approaches. Attempts are characterized by laryngoscope blade passing the lips. Difficult and failed airways were managed with extraglottic devices (EGD) or needle cricothyroidotomy. We reviewed patient characteristics, intubation methods, rescue techniques, and adverse events. Results: 150 patients required airway management: 120 (80%) were performed by Group 1 and 30 (20%) were performed by Group 2. Group 1 was both faster to intubate (1.39 vs 1.83 attempts, p = 0.0035) and less likely to use a rescue device (19.1% vs 50.0%, p = 0.0009). Both are equally likely to go directly to a rescue device (10% vs 10%, p = 1.0). All patients were successfully oxygenated and ventilated with either an endotracheal tube or EGD. No surgical airways were performed and no patients died as a result of a failed airway. Conclusion: While intubation success rates of paramedics with less than and greater than five years of experience are similar, less experienced paramedics use fewer attempts and are less likely to use a rescue device. Both recognize difficult airways and go directly to rescue devices equally. This highlights difficulties faced maintaining competence. Education requirements must be evaluated and redesigned to allow paramedics to maintain competence and emphasize airway management according to the latest resuscitation guidelines. How Well Do EMS 9-1-1 Protocols Predict ED Utilization for Pediatric Patients? Stephanie J. Fessler 1 , Harold K. Simon 1 , Daniel A. Hirsh 1 , Michael Colman 2 1 Emory University, Atlanta, GA; 2 Grady Health Systems, Atlanta, GA Background: The use of emergency medical services (EMS) for low-acuity pediatric problems has been well documented. However, it is unclear how accurately general EMS dispatch protocols predict the subsequent ED utilization for these patients. Objectives: To determine the ED resource utilization rate of pediatric patients categorized as low acuity by 9-1-1 dispatch protocols and then subsequently transferred to a children's hospital. Methods: All transports for pediatric patients from the scene by a large urban general EMS provider that were prioritized as low acuity by initial 9-1-1 dispatch protocols were identified. Protocols were based on the National Academy of Medical Priority Dispatch System, v12. Starting on Jan 1, 2010, 100 consecutive cases of patients transported to three pediatric emergency departments (PED) of a large tertiary care pediatric health care system were reviewed. Demographics, PED visit characteristics, resource utilization, and disposition were recorded. Those patients who received meds other than PO antipyretics, had labs other than a strep test, a radiology study, a procedure, or were not discharged home were categorized into the significant ED resource utilization group. Results: 93% of the patients were African American and either had public insurance or self-pay (86%, 13% respectively). The median age was 11 months (4d-13yr). 54% were female. None of these low-acuity patients were upgraded by EMS operators en route. Upon arrival to the PED, 45% of transported patients were classified into the significant utilization group. 
Six of the 100 total patients were admitted, including a 2 y/o requiring emergent intubation, an 8 m/o with a broken CVL, a 6 y/o with sickle cell pain crisis, and a 2 y/o with altered mental status. The remainder of the significant resource utilization group consisted of children needing procedures, anti-emetics, narcotic pain control, labs, and x-rays. Conclusion: In this general EMS 9-1-1 system, dispatch protocols for pediatric patients classified as low priority performed poorly in predicting subsequent ED utilization, with 45% requiring significant resources. Further, EMS operators did not recognize a critical child who needed emergent intervention. Opportunity exists to refine general EMS 9-1-1 protocols for children in order to more accurately define an EMS priority status that better correlates with ultimate needs and resource utilization. Objectives: Determine if there is an association between a patient's impression of the overall quality of care and his or her satisfaction with provided pain management. It was hypothesized that satisfaction with pain management would be significantly associated with a patient's impression of the overall quality of care. Methods: This was a retrospective review of patient satisfaction survey data initially collected by an urban ALS EMS agency from 1/1/2007 to 8/1/2010. Participants were randomly selected from all patients transported proportional to their paramedic-defined acuity, categorized as low, medium, or high, with a goal of 100 interviews per month. The proportions of patients sampled from each acuity level were 25% low, 50% medium, and 25% high. Patients were excluded if there was no telephone number recorded in the prehospital patient record or they were pronounced dead on scene. All satisfaction questions used a five-point Likert scale with ratings from excellent to poor that were dichotomized for analysis as excellent or other. The outcome variable of interest was the patient's perception of the overall quality of care. The main independent variable was the patient's rating of the on-scene staff's help in controlling or reducing their pain. Demographic variables were assessed for potential confounding. Results: There were 2,759 patients with complete data for the outcome and main independent variable, with 45.0% male respondents and an average age of 54.1 (SD = 22.7). Overall quality of care was rated excellent by 66.0% of patients while 59.1% rated their pain management as excellent. Of patients who rated their pain management as excellent, 87.9% rated overall quality of care as excellent while only 34.2% of patients rated overall quality excellent if pain management was not excellent. When controlling for potential confounding variables, patients who perceived their pain management to be excellent had 13.9 times the odds (95% CI 11.5-16.9) of rating their overall quality of care as excellent compared to those with non-excellent perceived pain management. Conclusion: Patients' perceptions of the overall quality of care were significantly associated with their perceptions of pain management. Objectives: The purpose of this study was to determine whether ground-based paramedics could learn and retain the skills necessary to successfully perform a cricothyrotomy. Methods: This retrospective study was performed in a suburban county with a population of 160,000 and 21,000 EMS calls per year. 
Participants were ground-based paramedics in a local EMS system who were taught wire-guided cricothyrotomy as part of a standardized paramedic educational update program. As part of the educational program, paramedics were taught wire-guided cricothyrotomy on a simulation model previously developed to train emergency medicine residents. After viewing an instructional video, the participants were allowed to practice using a 16-step checklist. Not every missed step constituted an automatic failure. Each paramedic was individually supervised performing a cricothyrotomy on the simulator until successful; a minimum of five simulations was required. Retention was assessed using the same 16-step checklist during annual skills testing, after a minimum of 6 weeks to a maximum of 3 months post-training. Results: A total of 55 paramedics completed both the initial training and reassessment during the time period studied. During the initial training phase, 100% (55 of 55) of the paramedics were successful in performing all 16 steps of the wire-guided cricothyrotomy. During the retention phase, 87.3% (48 of 55) retained the skills necessary to successfully perform the wire-guided cricothyrotomy. Of the 16-step checklist, most steps were performed successfully by all the paramedics or missed by only 1 of the 55 paramedics. Step #8, which involved removing the needle prior to advancing the airway device over the guidewire, was missed by 34.5% (19 of 55) of the participants. Step #8 was not an automatic failure since most participants immediately self-corrected and completed the procedure successfully. Conclusion: Paramedics can be taught and can retain the skills necessary to successfully perform a wire-guided cricothyrotomy on a simulator. Future research is necessary to determine if paramedics can successfully transfer these skills to real patients. Helicopter Emergency Medical Services Background: Netcare911 is one of the largest private providers of emergency air medical care in South Africa. Each HEMS (helicopter emergency medical service) crew is manned by a physician-paramedic team and is dispatched based on specific medical criteria, time to definitive care, and need for physician expertise. Objectives: To describe the characteristics of Netcare911 air medical evacuations in Gauteng province and to analyze the role of physicians in patient care and effect on call times. Methods: All patients transported by a Netcare911 helicopter over a one-year period from January to December 2008 were enrolled in the study. Injury classifications, demographics, procedures, scene and flight times were collected retrospectively from run sheets. Data were described by medians and interquartile intervals. Results: A total of 386 patients were transported on 384 flights originating from the Netcare911 Gauteng helicopter base. Ninety-two percent were trauma-related, with 74% resulting from motor vehicle accidents. Physician expertise was listed 30% of the time as the indication for air medical response. A total of 105 advanced procedures were performed by physicians on 93 patients, including paralytic-assisted intubations, chest tube placement, and cardiac pacing. The median total call time was 46 minutes, with 10 minutes spent on scene, compared with 54 and 24 minutes, respectively, when advanced procedures were performed by HEMS (p < 0.001). Conclusion: Trauma accounts for an overwhelming majority of patients requiring emergency air medical transportation. 
Advanced medical procedures were performed by physicians in nearly a quarter of the patients. There were significant differences in call times when advanced procedures were performed by HEMS. Objectives: We sought to evaluate the level of awareness and adoption of the off-line protocol guidelines by Utah EMS agencies. Methods: We surveyed all EMS agencies in Utah 18 months after protocol guideline release. Medical directors, EMS captains, or training coordinators completed a short phone survey regarding their knowledge of the EMSC protocol guidelines, and whether their agency had adopted them. In particular, participants were asked about the pain protocol guideline and their management of pediatric pain. Results: Of the 186 agencies, 182 participated in the survey (98%). Of those participating, 15 agencies (8%) were excluded from the analysis: 4 (2%) who only treat adults and 11 (6%) who do not participate in electronic data entry. Of the remaining 171 agencies (94%), 155 (91%) were familiar with the Utah EMSC protocol guidelines; 116 agencies (68%) have either partially or fully adopted the protocol guidelines. 132 agencies (77%) were familiar with the pain treatment protocol guideline; 29 (17%) had adopted it; 34 (21%) planned to either partially or fully adopt the protocol. Overall, 84 agencies (49%) had offline protocols allowing the administration of narcotics to children. Of those, 49 (58%) had intranasal fentanyl as an available medication and delivery route. Of the 84 agencies with offline protocols for pain, 77 (83%) reported familiarity with the EMSC pain protocol guideline. Conclusion: The creation and dissemination of statewide EMSC protocol guidelines results in widespread awareness (91%) and to date 68% of agencies have adopted them. Future investigation into factors associated with protocol adoption should be explored. Background: Intranasal (IN) naloxone is safe and effective for the treatment of opioid overdose. While it has been extensively studied in the out-of-hospital environment in the hands of paramedics and lay people, we are unaware of any studies evaluating the safety and efficacy of IN naloxone administration by BLS providers. In recent years IN naloxone has been added to the BLS armamentarium; however, most services/states require an ALS unit be dispatched and attempt an intercept if IN naloxone is administered by the BLS providers. Objectives: The purpose of this study is to evaluate the safety and effectiveness of BLS-administered IN naloxone in an urban environment. Methods: Retrospective cohort review as part of the ongoing QA process of all patients who had IN naloxone administration by BLS providers. The study was part of a special projects waiver by Massachusetts OEMS from February 2011 through November 2011 in a busy urban tiered EMS system in the metro-Boston area. Exclusion criteria: cardiac arrest. Demographic information was collected, as well as vital signs, number of naloxone doses by BLS, patient response to BLS naloxone administration (clinical improvement in mental status and/or respiratory status), ALS intercept. Descriptive statistics and confidence intervals are reported using Microsoft Excel and SPSS 17.0. Results: Fifty-six cases of BLS-administered IN naloxone were identified, and 2 were excluded as cardiac arrests. The included cases had a mean age of 38.8 years ±13.5 (range 16-82), and 74% (CI 60-85) were male. Of the 54 included cases, 76% (CI 62-87) of patients responded to BLS administration of naloxone. 
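The binomial confidence intervals quoted in these results can be reproduced with any standard statistics package; the abstract does not state which interval was used, so the sketch below assumes an exact (Clopper-Pearson) interval, which lands close to the reported 62-87% for 41 responders out of 54 included cases.

    # Hedged sketch: exact (Clopper-Pearson) 95% CI for a response proportion.
    # The choice of interval is an assumption; the abstract does not specify it.
    from statsmodels.stats.proportion import proportion_confint

    responders, n = 41, 54                    # roughly 76% of the 54 included cases
    low, high = proportion_confint(responders, n, alpha=0.05, method="beta")
    print(f"{responders / n:.0%} (95% CI {low:.0%}-{high:.0%})")   # approx. 76% (62%-87%)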
Of the responders, 17% (CI 7-32) required two doses. There were 10 protocol violations, representing 19% (CI 9.2-31.4) of the total administrations; however, in all 10 of these cases the patients had a positive response to the administration of IN naloxone. Seven of the protocol violations were patients who required a second 2 mg dose of naloxone. Eleven cases did not have an ALS intercept; only 1 of these 11 patients did not respond to BLS administration of naloxone. There were no identified adverse events. Conclusion: BLS providers safely and successfully administered IN naloxone, achieving a response rate consistent with studies of ALS providers' administration of IN naloxone. Given the success rate of BLS providers, it may be feasible for BLS to manage responders without the aid of an ALS intercept. Background: An estimated 20% of patients arriving by ambulance to the ED are in moderate to severe pain. However, the management of pain in the prehospital setting has been shown to be inadequate, and untreated pain may have negative consequences for patients. Objectives: To determine if focused education on pediatric pain management and implementation of a pain management protocol improved the prehospital assessment and treatment of pain in adult patients. Specifically, this study aimed to determine if documentation of pain scores and administration of morphine by EMS personnel improved. Methods: This was a retrospective before-and-after study conducted by reviewing a county-wide prehospital patient care database. The study population included all adult patients transported by EMS between 01 February 2006 and 28 February 2010 with a working assessment of trauma or burn. EMS patient care records were searched for documentation of pain scores and morphine administration 2 years before and 2 years after an intensive pediatric-focused pain management education program and implementation of a pain management protocol. Frequencies and 95% CIs were determined for all patients meeting the inclusion criteria in the before and after time periods, and chi-square was used to compare frequencies between time periods. A secondary analysis was conducted using only subjects documented as meeting the protocol's treatment guidelines. Results: 7,999 (10%) of 77,122 adult patients transported by EMS during the study period met the inclusion criteria: 4,357 in the before and 3,642 in the after period. Subject demographics were similar between the two periods. Documentation of pain score did not change between the time periods. Background: There is a presumption that ambulance response times affect patient outcome. We sought to determine if shorter response times really make a difference in hospital outcomes. Objectives: To determine if ambulance response time makes a difference in the outcomes of patients transported for two major trauma (motor vehicle crash injuries, penetrating trauma) and two major medical (difficulty breathing and chest pain complaints) emergencies. Methods: This study was conducted in a metropolitan EMS system serving a total population of 800,000 including urban and rural areas. Cases were included if the private EMS service was the first medical provider on scene, the case was priority 1, and the patient was 13 years or older. A 12-month time period was used for the data evaluation. Four diagnoses were examined: motor vehicle crash injuries, penetrating trauma, difficulty breathing, and chest pain complaints. 
Ambulance response times were assessed for each of the four different complaints. The patients' initial vital signs were assessed and the number of vital signs out of range was recorded. A sample of all cases that went to the single major trauma center was selected for evaluation of hospital outcome. Using this hospital sample, the number of vital signs out of range was assessed as a surrogate marker indicating severity of hospital outcome. Correlation coefficients were used to evaluate interactions between independent and outcome variables. Results: Of the 2164 cases we reviewed over the 12-month period, we found that the EMS service responded significantly faster to trauma complaints at 4.53 minutes (n = 254) than to medical complaints at 5.92 minutes (n = 1910). In the hospital sample of 587 cases, the number of vital signs out of range was positively correlated with hospital days (r = 0.11), admits (r = 0.12), ICU admits (r = 0.10), and deaths (r = 0.09), but not response times (r = -0.08). In the entire sample, there was no correlation between vital signs out of range and response times for any diagnosis (see figure). Conclusion: Based on our hospital sample, which showed that the number of vital signs out of range was a surrogate marker of worse hospital outcomes, we find that hospital outcomes are not related to initial response times. Adverse Effects Following Prehospital Use Of Ketamine By Paramedics Eric Ardeel Baylor College of Medicine, Houston, TX Background: Ketamine is widely used across specialties as a dissociative agent to achieve sedation and analgesia. Emergency medical services (EMS) use ketamine to facilitate intubation and pain control, as well as to sedate acutely agitated patients. Published studies of EMS ketamine practice and effects are scarce. Objectives: Describe the incidence of adverse effects occurring after ketamine administration by paramedics treating under a single prehospital protocol. Methods: A retrospective analysis was conducted of 98 consecutive patients receiving prehospital ketamine from paramedics in the suburban/rural EMS system of Montgomery County Hospital District, Texas, between August 1, 2010 and October 25, 2011. Ketamine administration indications were: need for rapid control of violent/agitated patients requiring treatment and transport; sedation and analgesia after trauma; facilitation of intubation and mechanical ventilation. Ketamine administration contraindications were: equivalent ends achieved by less invasive means; hypertensive crisis; angina; signs of significantly elevated intracranial pressure; anticipated inability to support or control airway. All patients were included, regardless of indication for ketamine administration. Data were abstracted from electronic patient care records and available continuous physiologic monitoring data, and analyzed for the presence of adverse effects as defined a priori in ''Clinical Practice Guidelines for Emergency Department Ketamine Dissociative Sedation: 2011 Update.'' Results: No patients were identified as experiencing adverse effects as defined by the referenced literature. Ketamine was utilized most often for patients with the following NEMSIS Provider's Primary Impression: 25 (26%) altered level of consciousness, 23 (23%) behavioral/psychiatric, 20 (20%) traumatic injury. Overall, combativeness was associated with 64 (65%) patients. The mean age was 41 years (range 3-94 years) and 50 (51%) were male. 
The mean ketamine dose was 150 mg (range 25-500 mg) and 24 (24%) patients received multiple administrations. Conclusion: In this patient population, our data indicate that prehospital ketamine use by EMS paramedics, across all indications for administration, was safe. Further study of ketamine's utility in EMS is warranted. Background: Rigorous evaluation of the effect of implementing nationally vetted evidence-based guidelines (EBGs) has been notoriously difficult in EMS. Specifically, human subjects issues and the Health Insurance Portability and Accountability Act (HIPAA) present major challenges to linking EMS data with distal outcomes. Objectives: To develop a model that addresses the human subjects and HIPAA issues involved with evaluating the effect of implementing the Traumatic Brain Injury (TBI) EBGs in a statewide EMS system. Methods: The Excellence in Prehospital Injury Care (EPIC) Project is an NIH-funded evaluation of the effect of implementing the EMS TBI Guidelines throughout Arizona (NINDS-1R01NS071049-01A1). To accomplish this, a partnership was developed between the Arizona Department of Health Services (ADHS), the University of Arizona, and more than 100 EMS agencies that serve approximately 85% of the state's population. Results: EBG implementation: Implementation follows all routine regulatory processes for making changes in EMS protocols. In Arizona, the entire project must be carried out under the authority of the ADHS Director. Evaluation: A before-after system design is used (randomization is not acceptable). HIPAA: As an ADHS-approved public health initiative, EPIC is exempt from HIPAA, allowing sharing of Protected Health Information between participating entities. For EPIC, the State Attorney General provided official verification of HIPAA exemption, thus allowing direct linkage of EMS and hospital data. IRB: Once EPIC was officially deemed a public health initiative, the university IRB process was engaged. As an officially sanctioned public health project, EPIC was determined not to be human subjects research. This allows the project to implement and evaluate the effect of this initiative without requiring individual informed consent. Conclusion: By utilizing an EMS-Public Health-University partnership, the ethical and regulatory challenges related to evaluating implementation of new EBGs can be successfully overcome. The integration of the Department of Health, the Attorney General, and the university IRB can properly protect citizens while permitting efficient implementation and rigorous evaluation of the effect of EBGs. This novel approach may be useful as a model for evaluation of implementing EMS EBGs in other states and large counties. Across age groups, 20.6%-58.1% of patients were transported to non-trauma centers. The most common reasons cited by EMS for hospital selection were: patient preference (50.6%), closest facility (20.7%), and specialty center (15.2%). Patient preference increased with age (p for trend 0.0001) and paralleled under-triage (Figure 1). Patients with ISS ≥ 16 transported to non-trauma hospitals by patient request had lower unadjusted mortality (3.8%, 95%CI 1.9-5.8) than similar patients transported to trauma centers (11.8%, 95%CI 10.7-12.8) or transported for other reasons (12.6%, 95%CI 11.4-13.7) (Figure 2). Under-triage appears to be influenced by patient preference and age. Self-selection for transport to non-trauma centers may result in under-triaged patients with inherently better prognosis than triage-positive patients. 
Background: Only 25% of all out-of-hospital cardiac arrest (OHCA) patients receive bystander CPR (cardiopulmonary resuscitation). The neighborhood in which an OHCA occurs has significant influence on the likelihood of receiving bystander CPR. Objectives: To utilize Geographic Information Systems to identify ''high-risk'' neighborhoods, defined as census tracts with high incidence of OHCA and low CPR prevalence. Methods: Design: Secondary analysis of the Cardiac Arrest Registry to Enhance Survival (CARES) dataset for Denver County, Colorado. Population: All consecutive adults (>18 years old) with OHCA due to cardiac etiology from January 1, 2009 through December 31, 2010. Data Analysis: Analyses were conducted in ArcGIS. Three spatial statistical methods were used: Local Moran's I (LMI), Getis-Ord Gi* (Gi*), and Spatial Empirical Bayes (SEB) adjusted rates. Census tracts with high incidence of OHCA, as identified by all three spatial statistical methods, were then overlain with low bystander CPR census tracts, which were identified in at least two out of three statistical methods (LMI, Gi*, or the lowest quartile of bystander CPR prevalence). Overlapping census tracts identified with both high OHCA incidence and low CPR prevalence were designated as ''high-risk''. Results: A total of 728 arrests in 142 census tracts occurred during the study period, with 595 arrests included in the final sample. Events were excluded if they were unable to be geocoded (n = 41), outside Denver County (n = 8), or occurred in a jail (n = 3), hospital/physician's office (n = 7), or nursing home (n = 74). For high OHCA incidence: LMI identified 29 census tracts, Gi* identified 45 census tracts, and the SEB method identified 28 census tracts. Twenty-five census tracts were identified by all three methods. For low bystander CPR prevalence: LMI identified 9 census tracts, Gi* identified 16 census tracts, and 101 census tracts were identified as being in the lowest quartile of CPR prevalence. Twenty-four census tracts were identified by two of the three methods. Two census tracts were identified as high-risk, having both high OHCA incidence and low CPR prevalence (Figure). High-risk census tract demographics as compared to Denver County are shown in the Table. Conclusion: The two high-risk census tracts, composed of minority and low-income populations, appear to be possible sites for targeted community-based CPR interventions. Objectives: We sought to assess the accuracy and correlation of geographic information system (GIS)-derived transport time compared to actual EMS transport time in OHCA patients. Methods: Prospective, observational cohort analysis of OHCA patients in Vancouver, B.C., one of the sites of the Resuscitation Outcomes Consortium (ROC). A random sample from all of the OHCA cases from 12/05 through 05/07 was selected for analysis from one site of the ROC Epistry. Using GIS, EMS transport time was derived from reported latitude/longitude coordinates of the OHCA event to the actual receiving hospital. This was calculated via the actual network distance using ArcGIS. This GIS-derived time was then compared to the actual EMS transport time (in minutes) using the Wilcoxon signed rank test. Scatter plots of actual vs. GIS times were created to evaluate the relationship between actual and calculated time. A linear regression model predicting actual EMS transport time from the derived GIS-time was also developed in order to examine the potential relationship between the two variables. 
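A regression of this form is simple to reproduce; the following Python sketch uses made-up transport times purely to illustrate the analysis described above (the study's own data are not shown in the abstract), reporting the slope, intercept, and R-squared.

    # Hedged sketch with synthetic numbers: regressing actual transport time on
    # GIS-derived network travel time and reporting R^2.
    import numpy as np
    from scipy.stats import linregress

    gis_minutes = np.array([4.2, 5.5, 6.1, 3.8, 7.0, 5.0])      # illustrative values only
    actual_minutes = np.array([6.5, 7.1, 9.0, 5.2, 10.4, 8.3])  # illustrative values only

    fit = linregress(gis_minutes, actual_minutes)
    print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, R^2 = {fit.rvalue ** 2:.2f}")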
Differences in the relationship were also investigated based on time of the day to reflect varying traffic conditions. Results: 641 cases were randomly selected for analysis. The median actual transport time was significantly longer than the median GIS-derived transport time (7.08 minutes vs. 5.50 minutes). Scatter plot analysis did not reveal any significant correlation between actual and GIS-based time. Additionally, there was poor approximation of GIS-based time and actual EMS time (R² = 0.20), with no evidence of a significant linear relationship between the two. The poorest correlation of time was observed during the morning hours (07:00-09:00; R² = 0.02) while the strongest correlation was during the overnight hours (00:00-07:00; R² = 0.26). Conclusion: GIS-derived time does not appear to correlate well with actual EMS transport time of OHCA patients. Efforts should be made to accurately obtain actual EMS transport times for OHCA patients. Objectives: We first sought to describe the incidence of OHCA presenting to the ED. We then sought to determine the association between hospital characteristics and survival to hospital admission. Methods: We identified patients with diagnoses of cardiac arrest or ventricular fibrillation (ICD-9 427.5 or 427.41) in the 2007 Nationwide Emergency Department Sample, a nationally representative estimate of all ED admissions in the US. EDs reporting ≥1 patient with OHCA were included. Our primary outcome was survival to hospital admission. We examined variability in hospital survival rate and also classified hospitals into high or low performers based on median survival rate. We used this dichotomous hospital-level outcome to examine factors associated with survival to admission including hospital and patient demographics, ED volume, cardiac arrest volume, and cardiac catheterization availability. All unadjusted and adjusted analyses were performed using weighted statistics and logistic regressions. Results: Of the 966 hospitals, 949 (98.2%) were included. In total, 44,782 cases of cardiac arrest were identified, representing an estimated 203,331 cases nationally. Overall ED OHCA survival to hospital admission was 23.5% (IQR 0.1%, 29.4%). In adjusted analyses, increased survival to admission was seen in hospitals with teaching status (OR 2.7, 95% CI 1.7-4.4, p < 0.001), annual ED visits ≥10,000 (OR 3.9, 95% CI 2.5-6.1, p < 0.001), and PCI capability (OR 9.1, 95% CI 1.2-68.2, p = 0.032). In separate adjusted analyses including teaching status and PCI capabilities, hospitals with >40 annual cardiac arrest cases (OR 3.0, 95% CI 2.2-4.2, p < 0.001) were also shown to have improved survival (Figure). Conclusion: ED volume, cardiac arrest volume, and PCI capability were associated with improved survival to hospital admission in patients presenting to the ED after OHCA. An improved understanding of the contribution of ED care to OHCA survival may be useful in guiding the regionalization of cardiac arrest care. Background: Prior investigations have demonstrated regional differences in out-of-hospital cardiac arrest (OHCA) outcomes, but none have evaluated survival variability by hospital within a single major US city. Objectives: We hypothesized that 30-day survival from OHCA would vary considerably among one city's receiving hospitals. Methods: We performed a retrospective review of prospectively collected cardiac arrest data from a large, urban EMS system. 
Our population included all OHCAs with a recorded social security number (which we used to determine 30-day survival through the Social Security Death Index) that were transported to a hospital between 1/1/2008 and 12/31/2010. We excluded traumatic arrests, pediatric arrests, and hospitals receiving less than 10 OHCAs with social security numbers over the three-year study period. We examined the associa-tion between receiving hospital and 30-day survival. Additional variables examined included: Level I trauma center status, teaching hospital status, OHCA volume, and whether post-arrest therapeutic hypothermia (TH) protocols were in place in 2008. Statistics were performed using chi-square tests and logistic regression. Results: Our study population comprised 550 arrest cases delivered to 18 unique hospitals with an overall 30-day survival of 14.4%. Mean age was 69.0 (SD 16.2) years. Males comprised 54.2% of the cohort; 53.3% of victims were black. Thirty-day survival varied significantly among the hospitals, ranging from 4.8% to 35.0% (chi-square 32.3, p = 0.014). OHCAs delivered to Level I trauma centers were significantly more likely to survive (19.5% vs. 12.7%, p = 0.05), as were those delivered to hospitals known to offer post-arrest TH (19.2% vs. 11.8%, p = 0.018). Hospital teaching status and OHCA volume were not associated with survival. Conclusion: There was significant variability in OHCA survival by hospital. Patients were significantly more likely to survive if transported to a Level I trauma center or hospital with post-arrest TH protocols, suggesting a potential role for regionalization of OHCA care. Limiting our population to OHCAs with recorded social security numbers reduced our power and may have introduced selection bias. Further work will include survival data on the complete set of OHCAs transported to hospitals during the three-year study period. Background: Traumatic brain injury is a leading cause of death and disability. Previous studies suggest that prehospital intubation in patients with TBI may be associated with mortality. Limited data exist comparing prehospital (PH) nasotracheal (NT), prehospital orotracheal (OT), and ED OT intubation and mortality following TBI. Objectives: To estimate the associations between PH NT, PH OT, and ED OT intubation and in-hospital mortality in patients with moderate to severe TBI, with hypotheses that PH NT and PH OT intubation would be associated with increased mortality when compared to ED OT or no intubation. Methods: An analysis using the Denver Health Trauma Registry, a prospectively collected database. Consecutive adult trauma patients from 1995-2008 with moderate to severe TBI defined as head Abbreviated Injury Scale (AIS) scores of 2-5. Structured chart abstraction by blinded physicians was used to collect demographics, injury and prehospital care characteristics, intubation status and timing, in-hospital mortality and survival time, and neurologic function at discharge. Poor neurologic function was defined as Cerebral Performance Category score of 3-5. Multivariable logistic regression and survival analyses were performed, using multiple imputation for missing data. Results: Of the 3,517 patients, the median age was 38 (IQR 27-51) years. The median PH GCS was 14 (IQR 6-15), median Injury Severity Score was 20 (IQR 13-29), and median head AIS was 4 (IQR 3-5). PH NT occurred in 15.8%, PH OT in 9.5%, and ED OT in 17.4%, while mortality occurred in 17.5%. The 24-, 48-, and 72-hour survival analyses are outlined in the Table. 
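Group-wise survival curves of the kind referenced here are conventionally drawn with the Kaplan-Meier estimator. The sketch below is purely illustrative, using the lifelines package and a toy data frame (both assumptions, not the study's registry data or its multivariable, multiply imputed models) to show how unadjusted curves by intubation route could be generated.

    # Hedged sketch (synthetic data): unadjusted Kaplan-Meier curves by intubation
    # route. The lifelines package and the toy data frame are illustrative only.
    import pandas as pd
    from lifelines import KaplanMeierFitter

    df = pd.DataFrame({
        "route": ["PH NT", "PH NT", "PH OT", "PH OT", "ED OT", "ED OT"],
        "hours": [12, 72, 24, 72, 48, 72],   # time to death or censoring at 72 h
        "died":  [1, 0, 1, 0, 1, 0],         # 1 = in-hospital death observed
    })

    kmf = KaplanMeierFitter()
    ax = None
    for route, grp in df.groupby("route"):
        kmf.fit(grp["hours"], event_observed=grp["died"], label=route)
        ax = kmf.plot_survival_function(ax=ax)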
Survival curves for PH NT, PH OT, and ED OT are demonstrated in the Figure (p < 0.001) . Conclusion: Prehospital intubation in patients with moderate to severe TBI is associated with increased mortality. Contrary to our initial hypothesis, there was also a significant association between ED intubation and mortality. These associations persisted despite survival time, and while adjusting for injury severity. Background: SBDP150 is a breakdown product of the cytoskeletal protein alpha-II-spectrin found in neurons and has been detected in severe TBI. Objectives: This study examined whether early serum levels of SBDP150 could distinguish: 1) mild TBI from three control groups; 2) those with and without traumatic intracranial lesions on CT (+CT vs -CT); and 3) those having a neurosurgical intervention (+NSG vs -NSG) in mild and moderate TBI (MMTBI). Methods: This prospective cohort study enrolled adult patients presenting to two Level I trauma centers following MMTBI with blunt head trauma with loss of consciousness, amnesia, or disorientation and a GCS 9-15. Control groups included uninjured controls and trauma controls presenting to the ED with orthopedic injuries or an MVC without TBI. Mild TBI was defined as GCS 15 and moderate TBI as having a GCS <15. Blood samples were obtained in all patients within 4 hours of injury and measured by ELISA for SBDP150 (ng/ml). The main outcomes were: 1) the ability of SBDP150 to distinguish mild TBI from three control groups; 2) to distinguish +CT from -CT and; 3) to distinguish +NSG from -NSG. Data were expressed as means with 95%CI, and performance was tested by ROC curves (AUC and 95%CI). Results: There were 275 patients enrolled: 54 TBI patients (42 GCS 15, 12 GCS 9-14), 23 trauma controls (16 MVC controls and 7 orthopedic controls), and 198 uninjured controls. The mean age of TBI patients was 39 years (range 19-70) with 63% males. Fourteen (14%) had a +CT and 9% had +NSG. Mean serum SBDP150 levels were 0.764 (95%CI 0.561-0.968) in normal controls, 1.035 (0.091-2.291) in orthopedic controls, 1.209 (0.236-2.181 ) in MVC controls, 2.764 (1.700-3.827 ) in mild TBI with GCS 15, and 5.227 (0.837-9.617) in TBI with GCS 9-14 (P < 0.001). The AUC for distinguishing mild TBI from both controls was 0.83 (95%CI 0.68-0.99). Mean SBDP150 levels in patients with -CT versus +CT were 2.170 (1.340-3.000) and 6.797 (2.227-11.368) respectively (P < 0.001) with AUC = 0.78 (95%CI 0.61-0.95). Mean SBDP150 levels in patients with -NSG versus +NSG were 2.492 (1.391-3.593) and 6.867 (3.891-9.843) respectively (P < 0.001) with AUC = 0.88 (95%CI 0.77-0.98). Conclusion: Serum SBDP150 levels were detectable in serum acutely after injury and were associated with measures of injury severity including CT lesions and neurosurgical intervention. Further study is required to validate these findings before clinical application. Utility of Platelet Background: Pre-injury use of anti-platelet agents (e.g., clopidogrel and aspirin) is a risk factor for increased morbidity and mortality in patients with traumatic intracranial hemorrhage (tICH). Some investigators have recommended platelet transfusion to reverse the anti-platelet effects in tICH. Objectives: This evidence-based medicine review examines the evidence regarding the effect of platelet transfusion in emergency department (ED) patients with pre-injury anti-platelet use and tICH on patientoriented outcomes. Methods: The MEDLINE, EMBASE, Cochrane Library, and other databases were searched. 
Studies were selected for inclusion if they compared platelet transfusion to no platelet transfusion in the treatment of adult ED patients with pre-injury anti-platelet use and tICH, and reported rates of mortality, neurocognitive function, or adverse effects as outcomes. We assessed the quality of the included studies using ''Grading of Recommendations Assessment, Development and Evaluation'' (GRADE) criteria. Categorical data are presented as percentages with 95% confidence interval (CI). Relative risks (RR) are reported when clinically significant. Results: Five retrospective, registry-based studies were identified, which enrolled 635 patients cumulatively. Based on standard criteria, three studies provided ''low''-quality evidence and two provided ''very low''-quality evidence. One study reported higher in-hospital mortality in patients with platelet transfusion (Ohm et al.), while another showed a lower mortality rate in patients receiving platelet transfusion (Wong et al.). Three studies did not show any statistical difference in comparing mortality rates between the groups (Table). No studies reported intermediate- or long-term neurocognitive outcomes or adverse events. Conclusion: Five retrospective registry studies with suboptimal methodologies provide inadequate evidence to support the routine use of platelet transfusion in adult ED patients with pre-injury anti-platelet use and tICH. Abnormal Levels of End-Tidal Carbon Dioxide (ETCO 2 ) Are Associated with Severity of Injury in Mild and Moderate Traumatic Brain Injury (MMTBI) Linda Papa 1 , Artur Pawlowicz 2 , Carolina Braga 1 , Suzanne Peterson 1 , Salvatore Silvestri 1 1 Orlando Regional Medical Center, Orlando, FL; 2 University of Central Florida, Orlando, FL Background: Capnography is a fast, non-invasive technique that is easily administered and accurately measures exhaled ETCO 2 concentration. ETCO 2 levels respond to changes in ventilation, perfusion, and metabolic state, all of which may be altered following TBI. Objectives: This study examined the relationship between ETCO 2 levels and severity of TBI as measured by clinical indicators including Glasgow Coma Scale (GCS) score, computerized tomography (CT) findings, requirement of neurosurgical intervention, and levels of a serum biomarker of glial damage. Methods: This prospective cohort study enrolled adult patients presenting to a Level I trauma center following MMTBI, defined by blunt head trauma followed by loss of consciousness, amnesia, or disorientation and a GCS 9-15. ETCO 2 measurements were recorded from the prehospital and emergency department records and compared to indicators of TBI severity. Results: Of the 46 patients enrolled, 21 (46%) had a normal ETCO 2 level and 25 (54%) had an abnormal ETCO 2 level. The mean age of enrolled patients was 40 years (range 19-70) and 32 (70%) were male. Mechanisms of injury included motor vehicle collision in 19 (41%), motorcycle collision in 9 (20%), fall in 8 (17%), bicycle/pedestrian struck in 8 (17%), and other in 2 (4%). Eight (17%) patients had a GCS 9-12 and 38 (83%) had a GCS 13-15. Of the 11 (24%) patients with intracranial lesions on CT, 10 (91%) had an abnormal ETCO 2 level (p = 0.006). Of the 5 (11%) patients who required a neurosurgical intervention, 100% had an abnormal ETCO 2 level (p = 0.05). Levels of a biomarker indicative of astrogliosis were significantly higher in those with abnormal ETCO 2 compared to those with a normal ETCO 2 (p = 0.026). 
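The association between abnormal ETCO 2 and CT lesions reported above is a 2x2 comparison. The abstract does not name the test used; the sketch below assumes Fisher's exact test, a common choice for cells this small, with counts derived from the reported totals (25 abnormal and 21 normal ETCO 2 overall; 10 of the 11 CT-positive patients abnormal).

    # Hedged sketch: 2x2 comparison of abnormal ETCO2 by CT lesion status.
    # Cell counts are reconstructed from the reported totals; the use of
    # Fisher's exact test is an assumption, not stated in the abstract.
    from scipy.stats import fisher_exact

    table = [[10, 1],    # CT lesion present: abnormal ETCO2, normal ETCO2
             [15, 20]]   # CT lesion absent:  abnormal ETCO2, normal ETCO2
    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.1f}, p = {p_value:.3f}")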
Conclusion: Abnormal levels of ETCO 2 were significantly associated with clinical measures of brain injury severity. Further research with a larger sample of MMTBI patients will be required to better understand and validate these findings. Background: Acetaminophen (APAP) poisoning is the most frequent cause of acute hepatic failure in the US. Toxicity requires bioactivation of APAP to toxic metabolites, primarily via CYP2E1. Children are less susceptible to APAP toxicity; one current theory is that children's conjugative pathway (sulfonation) is more active. Liquid APAP preparations contain propylene glycol (PG), a common excipient that inhibits APAP bioactivation and reduces hepatocellular injury in vitro and in rodents. CYP2E1 inhibition may decrease toxicity in children, who tend to ingest liquid APAP preparations, and suggests a potential novel therapy. Objectives: To compare phase I (toxic) and phase II (conjugative) metabolism of liquid versus solid prepara-tions of APAP. We hypothesize that ingestion of a liquid APAP preparation results in decreased production of toxic metabolites relative to a solid preparation, likely due to the presence of PG in the liquid preparations. Methods: DESIGN-pharmacokinetic cross-over study. SETTING-University hospital clinical research center. SUBJECTS-Adults ages 18-40 taking no chronic medications. INTERVENTIONS-Subjects were randomized to receive a 15 mg/kg dose of a commercially available solid or liquid APAP preparation. After a washout period of greater than 1 week, subjects received the same dose of APAP in the alternate preparation. APAP, APAP-glucuronide and APAP-sulfate (phase 2 metabolites), APAP-cysteinate and APAP-mercapturate (phase 1 metabolites) were analyzed via LC/MS in plasma over 8 hours. Peak concentrations and measured AUC were compared using paired-sample T-tests. Plasma PG levels were measured. Results: Fifteen subjects completed the protocol. Peak concentrations and AUCs of the CYP2E1 derived toxic metabolites were significantly lower following ingestion of the liquid preparation (Table, Figure) . The glucuronide and sulfate metabolites were not different. PG was present following ingestion of liquid but not solid preparations. Conclusion: Ingestion of liquid relative to solid preparations in therapeutic doses results in decreased plasma levels of toxic APAP metabolites. This may be due to inhibition of CYP2E1 by PG, and may explain the decreased susceptibility in children. A less hepatotoxic formulation of APAP can potentially be developed if co-formulated with a CYP2E1 inhibitor. Background: Pressure immobilization bandages have been shown to delay mortality for up to 8 hours after coral snake envenomation, providing an inexpensive and effective treatment when antivenin is not readily available. However, long-term efficacy has not been established. Objectives: Determine if pressure immobilization bandages, consisting of an ace wrap and splint, can delay morbidity and mortality from coral snake envenomation, even in the absence of antivenin therapy. Methods: Institutional Animal Care and Use Committee approval was obtained. This was a randomized, observational pilot study using a porcine model. Ten pigs (17.3 kg to 25.6 kg) were sedated and intubated for 5 hours. Pigs were injected subcutaneously in the left distal foreleg with 10 mg of lyophilized M. fulvius venom resuspended in water, to a depth of 3 mm. 
Pigs were randomly assigned to either a control group (no compression bandage and splint) or a treatment group (compression bandage and splint) approximately 1 minute after envenomation. Pigs were monitored daily for 21 days for signs of respiratory depression, decreased oxygen saturations, and paresis/paralysis. In case of respiratory depression, pigs were euthanized and time to death recorded. Chi-square was used to compare rates of survival up to 21 days, and a Kaplan-Meier survival curve was constructed. Results: Average survival time of control animals was 412 ± 90 minutes compared to 12,642 ± 7,132 minutes for treated animals. Significantly more pigs in the treatment group survived to 24 hours than in the control group (p = 0.03). Two of the treatment pigs survived to the endpoint of 21 days, but showed necrosis of the distal lower extremity. Conclusion: Long-term survival after coral snake envenomation is possible in the absence of antivenin with the use of pressure immobilization bandages. The applied pressure of the bandage is critical to allowing survival without secondary consequences (i.e., necrosis) of envenomation. Future studies should be designed to accurately monitor the pressures applied. Background: Patients exposed to organophosphate (OP) compounds demonstrate central apnea. The Kölliker-Fuse nuclei (KF) are cholinergic nuclei in the brainstem involved in central respiratory control. Objectives: We hypothesize that exposure of the KF is both necessary and sufficient for OP-induced central apnea. Methods: Anesthetized and spontaneously breathing Wistar rats (n = 24) were exposed to a lethal dose of dichlorvos using three experimental models. Experiment 1 (n = 8) involved systemic OP poisoning using subcutaneous (SQ) dichlorvos (100 mg/kg or 3x LD50). Experiment 2 (n = 8) involved isolated poisoning of the KF using stereotactic microinjections of dichlorvos (625 micrograms in 50 microliters) into the KF. Experiment 3 (n = 8) involved systemic OP poisoning with isolated protection of the KF using SQ dichlorvos (100 mg/kg) and stereotactic microinjections of organophosphatase A (OpdA), an enzyme that degrades dichlorvos. Respiratory and cardiovascular parameters were recorded continuously. Histological verification of injection site was performed using KMnO4 injections. Animals were followed post-poisoning for 1 hour or death. Between-group comparisons were performed using a repeated-measures ANOVA or Student's t-test where appropriate. Results: Animals poisoned with SQ dichlorvos demonstrated respiratory depression starting 5.1 min post-exposure, progressing to apnea 15.9 min post-exposure. There was no difference in respiratory depression between animals with SQ dichlorvos and those with dichlorvos microinjected into the KF. Despite differences in amount of dichlorvos (100 mg/kg vs 1.8 mg/kg) and method of exposure (SQ vs CNS microinjection), 10 min following dichlorvos both groups (SQ vs microinjection, respectively) demonstrated similar percent decreases in respiratory rate (51.5 vs 72.2, p = 0.14) and minute ventilation. Background: Patients sustaining rattlesnake envenomation often develop thrombocytopenia, the etiology of which is not clear. Laboratory studies have demonstrated that venom from several species, including the Mojave rattlesnake (Crotalus scutulatus scutulatus), can inhibit platelet aggregation. 
In humans, administration of crotaline Fab antivenom (AV) has been shown to result in transient improvement of platelet levels; however, it is not known whether platelet aggregation also improves after AV administration. Objectives: To determine the effect of C. scutulatus venom on platelet aggregation in vitro in the presence and absence of crotaline Fab antivenom. Methods: Blood was obtained from four healthy male adult volunteers not currently using aspirin, NSAIDs, or other platelet-inhibiting agents. C. scutulatus venom from a single snake with known type B (hemorrhagic) activity was obtained from the National Natural Toxins Research Center. Measurement of platelet aggregation by an aggregometer was performed using five standard concentrations of epinephrine (a known platelet aggregator) on platelet-rich plasma over time, and a mean area under the curve (AUC) was calculated. Five different sample groups were measured: 1) blood alone; 2) blood + C. scutulatus venom (0.3 mg/mL); 3) blood + crotaline Fab AV (100 mg/mL); 4) blood + venom + AV (100 mg/ mL); 5) blood + venom + AV (4 mg/mL). Standard errors of the mean (SEM) were calculated for each group. Results: Antivenom administration by itself did not significantly affect platelet aggregation compared to baseline (103.8 ± 3.4%, p = 0.47). Administration of venom decreased platelet aggregation (72.0 ± 8.5%, p < 0.05). Concentrated AV administration in the presence of venom normalized platelet aggregation (101.4 ± 6.8%) and in the presence of diluted AV significantly increased aggregation (133.9 ± 9.0%); p < 0.05 for both groups when compared to the venom-only group. To control for the effects of the venom and AV, each was run independently in platelet-rich plasma without epinephrine; neither was found to significantly alter platelet aggregation. Conclusion: Crotaline Fab AV improved platelet aggregation in an in vitro model of platelet dysfunction induced by venom from C. scutulatus. The mechanism of action remains unclear but may involve inhibition of venom binding to platelets or a direct action of the antivenom on platelets. Background: Routine use of both breathalyzers and hand sanitizers is common across emergency depart-ments. The most common hand sanitizer on the market, Purell, contains 62% ethyl alcohol and a lesser amount of isopropyl alcohol. Previous investigations have documented that risk is low to the health care worker who applies frequent hand sanitizers to themselves. However, it is unknown whether this alcohol mixture causes false readings on a breathalyzer machine being used to determine alcohol levels on others. Objectives: To determine the effect on the measurement of breathalyzer readings in individuals who have not consumed alcohol after hand sanitizer is applied to the experimenter holding a breathalyzer machine. Methods: After obtaining informed consent, a breathalyzer reading was obtained in participants who had not consumed any alcohol in the last 24 hours. Three different experiments were performed with 25 different participants in each. In Experiment 1, two pumps of hand sanitizer were applied to the experimenter. Without allowing the sanitizer to dry, the experimenter then measured the breathalyzer reading of the participant. In Experiment 2, one pump of sanitizer was applied to the experimenter. Measurements of the participant were taken without allowing the sanitizer to dry. 
In Experiment 3, one pump of sanitizer was placed on the experimenter and rubbed until dry according to the manufacturer's recommendations. Readings were recorded and analyzed using paired t-tests. Results: The initial breathalyzer reading for all participants was 0. After two pumps of hand sanitizer were applied without drying (Experiment 1), breathalyzers ranged from 0.02 to 0.17, with a mean of 0.11, above the legal intoxication limit (t(24) = -15.3, p < 0.001). After one pump of hand sanitizer was applied without drying (Experiment 2), breathalyzers ranged from 0.02 to 0.11, with a mean of 0.06 (t(24) = -14.1, p < 0.001). After one pump of hand sanitizer was applied according to the manufacturer's directions (Experiment 3), breathalyzers ranged from 0.0 to 0.02, with a mean of 0.01 (t(24) = -5.1, p < 0.001). Conclusion: Use of hand sanitizer according to the manufacturer's recommendations results in a small but significant increase in breathalyzer readings. However, improper use and overuse of common hand sanitizer elevate routine breathalyzer readings and can mimic intoxication in individuals who have not consumed alcohol. Stephanie Carreiro, Jared Blum, Francesca Beaudoin, Gregory Jay, Jason Hack Objectives: The primary aim of this study was to determine if pretreatment with intravenous lipid emulsion (ILE) affects the hemodynamic response to epinephrine in a rat model. Hemodynamic response was measured by a change in heart rate (HR) and mean arterial pressure (MAP). We hypothesized that ILE would limit the rise in MAP and HR that typically follows epinephrine administration. Methods: Twenty male Sprague Dawley rats (approximately 7-8 weeks of age) were sedated with isoflurane and pretreated with a 15 mL/kg bolus of ILE or normal saline, followed by a 15 mcg/kg dose of epinephrine intravenously. Intra-arterial blood pressure and HR were monitored continuously until both returned to baseline (Biopaq). A multivariate analysis of variance (MANOVA) was performed to assess the difference in MAP and HR between the two groups. Standard t-tests were then used to compare the peak change in MAP, time to peak MAP, and time to return to baseline MAP in the two groups. Results: Overall, a significant difference was found between the two groups in MAP (p = 0.01) but not in HR (p = 0.34). There was a significant difference (p = 0.023) in time to peak MAP in the ILE group (54 sec, 95% CI 44-64) versus the saline group (40 sec, 95% CI 32-48) and a significant difference (p = 0.004) in time to return to baseline MAP in the ILE group (171 sec, 95% CI 148-194) versus the saline group (130 sec, 95% CI 113-147). There was no significant difference (p = 0.28) in the peak change in MAP of the ILE group (75.4 mmHg, 95% CI 66-85) versus the saline group (69.9 mmHg, 95% CI 64-76). Conclusion: Our data show that, in this rat model, ILE pretreatment leads to a significant difference in MAP response to epinephrine, but no difference in HR response. ILE delayed the peak effect and prolonged the duration of effect on MAP but did not alter the peak increase in MAP. This suggests that the use of ILE may delay the time to peak effect of epinephrine if the drugs are administered concomitantly to the same patient. Further research is needed to explore the mechanism of this interaction. Rasch Analysis of the Agitation Severity Scale when Used with Emergency Department Acute Psychiatry Patients Tania D. Strout, Michael R. 
Baumann Maine Medical Center, Portland, ME Background: Agitation is a frequently observed and problematic phenomenon in mental health patients being treated in the emergency setting. The Agitation Severity Scale (AgSS), a reliable and valid instrument, was developed using classical test theory to measure agitation in acute psychiatry patients. Objectives: The aim of this study was to analyze the AgSS according to the Rasch measurement model and use the results to determine whether improvements to the instrument could be made. Methods: This prospective, observational study was IRB-approved. A total of 270 adult ED patients with psychiatric chief complaints and DSM-IV-TR diagnoses were observed using the AgSS. The Rasch rating scale model was employed to evaluate the 17 items of the AgSS using WINSTEPS statistical software. Unidimensionality, item fit, response category performance, person and item separation reliability, and hierarchical ordering of items were all examined. A principal components analysis (PCA) of the Rasch residuals was also performed. Results: Variable maps revealed that all of the AgSS items were used to some degree and that the items were ordered in a way that makes clinical sense. Several duplicative items, indicating the same degree of agitation, were identified. Item (5.19) and person (2.01) separation statistics were adequate, indicating appropriate spread of items and subjects along the agitation continuum and providing support for the instrument's reliability. Keymaps indicated that the AgSS items are functioning as intended. Analysis of fit demonstrated no extreme misfitting items. PCA of the Rasch residuals revealed a small amount of residual variance, but provided support for the AgSS as being unidimensional, measuring the single construct of agitation. Conclusion: The results of this Rasch analysis support the AgSS as a psychometrically robust instrument for use with acute psychiatry patients in the emergency setting. Several duplicative items were identified that may be eliminated and re-evaluated in future research; this would result in a shorter, more clinically useful scale. In addition, a gap in items for patients with lower levels of agitation was identified. Generation of additional items intended to measure low levels of agitation could improve clinicians' ability to differentiate between these patients. Background: Attempted suicide is one of the strongest clinical predictors of subsequent suicide and occurs up to 20 times more frequently than completed suicide. As a result, suicide prevention has become a central focus of mental health policy. In order to improve current treatment and intervention strategies for those presenting with suicide attempt and self-injury in the emergency department (ED), it is necessary to have a better understanding of the types of patients who present to the ED with these complaints. Objectives: To describe the epidemiology of ED visits for attempted suicide and self-inflicted injury over a 16-year period. Methods: Data were obtained from the National Hospital Ambulatory Medical Care Survey (NHAMCS). All visits for attempted suicide and self-inflicted injury (E950-E959) during 1993-2008 were included. Trend analyses were conducted using STATA's nptrend (a nonparametric test for trends that is an extension of the Wilcoxon rank-sum test) and regression analyses. A two-tailed P < 0.05 was considered statistically significant. 
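Stata's nptrend implements Cuzick's extension of the Wilcoxon rank-sum test, for which SciPy has no direct one-line equivalent. As a rough, hedged stand-in (a different test, shown only to illustrate checking for a monotone trend across survey years), one can compute a rank correlation between year and annual visit rate; the rates below are placeholders, not the NHAMCS estimates.

    # Hedged illustration: rank-correlation check for a monotone trend in annual
    # visit rates. This is not Cuzick's nptrend; values are placeholders only.
    from scipy.stats import kendalltau

    years = list(range(1993, 2009))
    visit_rate = [0.9, 1.0, 1.0, 1.1, 1.2, 1.2, 1.3, 1.3,
                  1.4, 1.5, 1.5, 1.6, 1.7, 1.7, 1.8, 1.9]   # visits per 1,000 population
    tau, p = kendalltau(years, visit_rate)
    print(f"Kendall tau = {tau:.2f}, two-tailed p = {p:.4f}")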
Results: Over the 16-year period, there was an average of 420,000 annual ED visits for attempted suicide and self-inflicted injury (1.50 [95% confidence interval (CI) 1.33-1.67] visits per 1,000 US population). The overall mean patient age was 31 years, with visits most common among ages 15-19 (3.70; 95%CI 3.11-4.30). The average annual number of ED visits for suicide attempt and self-inflicted injury more than doubled from 244,000 in 1993-1996 to 538,000 in 2005-2008. During the same timeframe, ED visits for these injuries per 1,000 US population almost doubled for males (0.84 to 1.62), females (1.04 to 1.96), whites (0.94 to 1.82), and blacks (1.14 to 2.10). No temporal differences were found for method of injury or ED disposition; there was, however, a significant decrease in visits determined by the physician to be urgent/emergent from 95% in 1993 to 70% in 2008. Conclusion: ED visit volume for attempted suicide and self-inflicted injury has increased over the past two decades in all major demographic groups. Awareness of these longitudinal trends may assist efforts to increase research on suicide prevention. In addition, this information may be used to inform current suicide- and self-injury-related ED interventions and treatment programs. Benjamin L. Bregman, Janice C. Blanchard, Alyssa Levin-Scherz George Washington University, Washington, DC Background: The emergency department (ED) has increasingly become a health care access point for individuals with mental health needs. Recent studies have found that rates of Major Depressive Disorder (MDD) diagnosed in EDs are far above the national average. We conducted a study assessing whether individuals with frequent ED visits had higher rates of MDD than those with fewer ED visits in order to help guide screening and treatment of depressed individuals encountered in the ED. Objectives: This study evaluated potential risk factors associated with MDD. We hypothesized that patients who are frequent ED visitors would have higher rates of MDD. Methods: This was a single-center, prospective, cross-sectional study. We used a convenience sample of non-critically ill, English-speaking adult patients presenting with non-psychiatric complaints to an urban academic ED over 6 months in 2011. We oversampled patients presenting with ≥3 visits over the previous 364 days. Subjects were surveyed about their demographic and other health and health care characteristics and were screened with the PHQ-9, a nine-item questionnaire that is a validated, reliable predictor of MDD. We conducted bivariate (chi-square) and multivariate analysis controlling for demographic characteristics using STATA v. 10.0. Our principal dependent variable of interest was a positive depression screen (PHQ-9 score ≥10). Our principal independent variable of interest was ≥3 visits over the previous 364 days. Results: Our response rate was 90.7% with a final sample size of 1012. Of our total sample, 313 (30.9%) had three or more visits within the prior 364 days. One hundred (32%) frequent visitors had a positive PHQ-9 MDD screen, as compared to 142 (20.3%) of subjects with fewer than three visits (p < 0.0001). In our multivariate analysis, the adjusted odds ratio for having three or more visits among subjects with a positive depression screen was 1.42 (95% CI 1.03-1.97). Of subjects with three or more visits with a positive depression screen, only 116 (37%) were actively being treated for MDD at the time of their visit. 
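The unadjusted comparison reported above (32% vs. 20.3% positive screens, p < 0.0001) is a chi-square test on a 2x2 table. The sketch below reconstructs the cell counts from the reported totals; it reproduces only the bivariate comparison, not the multivariate model behind the adjusted odds ratio of 1.42.

    # Hedged sketch of the unadjusted comparison: positive PHQ-9 screens among
    # frequent (>=3 visits) vs. non-frequent ED users. Cell counts are derived
    # from the reported totals (313 frequent visitors, 1012 overall).
    from scipy.stats import chi2_contingency

    table = [[100, 213],   # >=3 visits: screen positive, screen negative
             [142, 557]]   # <3 visits:  screen positive, screen negative
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.5f}")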
Conclusion: Our study found a high prevalence of untreated depression among frequent users of the ED. EDs should consider routinely screening patients who are frequent consumers for MDD. In addition, further studies should evaluate the effect of early treatment and follow up for MDD on overall utilization of ED services. Access to Psychiatric Care Among Patients with Depression Presenting to the Emergency Department Janice C. Blanchard, Benjamin L. Bregman, Dana Rosenfarb, Qasem Al Jabr, Eun Kim George Washington University, Washington, DC Background: Literature suggests that there is a high rate of Major Depressive Disorder (MDD) in emergency department (ED) users. However, access to outpatient mental health services is often limited due to lack of providers. As a result, many persons with MDD who are not in active treatment may be more likely to utilize the ED as compared to those who are currently undergoing outpatient treatment. Objectives: Our study evaluated utilization rates and demographic characteristics associated with patients with a prior diagnosis of MDD not in active treatment. We hypothesized that patients who present to the ED with untreated MDD will have more frequent ED visits. Methods: This was a single center, prospective, crosssectional study. We used a convenience sample of noncritically ill, English speaking adult patients presenting with non-psychiatric complaints to an urban academic ED over 6 months in 2011. Subjects were surveyed about their demographic and other health and health care characteristics and were screened with the PHQ 9, a nine-item questionnaire that is a validated, reliable predictor of MDD. We conducted bivariate (chi-square) and multivariate analysis controlling for demographic characteristics using STATA v. 10.0. Our principal dependent variable of interest was a positive depression screen (PHQ 9 ‡ 10). Our analysis focused on the subset of patients with a prior diagnosis of MDD with a positive screen for MDD during their ED visit. Results: Our response rate was 90.7% with a final sample size of 1012. 243 (24.0%) patients screened positive for MDD with a PHQ 9 Score ‡10. Of the 243 patients with a positive depression screen, 55.1% reported a prior history of treatment for MDD (n = 134). Of these patients, only 57.6% were currently actively receiving treatment. Hispanics who screened positive for depression with a history of MDD were less likely to actively be undergoing treatment as compared to non-Hispanics (22.2% versus 46.9%, p = 0.041). Patients with incomes less than $20,000 were more likely to actively be receiving treatment as opposed to higher incomes (76.3% versus 42.7% p = 0.003). Conclusion: Patients presenting to our ED with untreated MDD are more likely to be Hispanic and less likely to be low income. The emergency department may offer opportunities to provide antidepressant treatment for patients who screen positive for depression but who are not currently receiving treatment. Evaluation of a Two-Question Screening Tool (PHQ-2) for Detecting Depression in Emergency Department Patients Jeffrey P. Smith, Benjamin Bregman, Janice Blanchard, Nasser Hashim, Mary Pat McKay George Washington University, Washington, DC Background: The literature suggests there is a high rate of undiagnosed depression in ED patients and that early intervention can reduce overall morbidity and health care costs. There are several well validated screening tools for depression including the nine-item Patient Health Questionnaire (PHQ-9). 
A tool using a two-question subset, the PHQ-2, has been shown to be an easily administered, reasonably sensitive screening tool for depression in primary care settings. Objectives: To determine the sensitivity and specificity of the PHQ-2 in detecting major depressive disorders (MDD) among adult ED patients presenting to an urban teaching hospital. We hypothesize that the PHQ-2 is a rapid, effective screening tool for depression in a general ED population. Methods: Cross sectional survey of a convenience sample of 1012 adult, non-critically ill, English speaking patients with medical and not psychiatric complaints presenting to the ED between 9am and 11pm weekdays. Patients were screened for MDD with the PHQ-9. We used SPSS v19.0 to analyze the specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and kappa of PHQ-2 scores of 2 and 3 (out of possible total score of 6) compared to a validated cut-off score of 10 or higher of 27 points on the PHQ-9. The two questions on the PHQ-2 are: ''Over the last two weeks, how often have you had little interest in doing things? How often have you felt down, depressed or hopeless?'' Responses are scored from 0-3 based on ''never'',''several days'', ''more than half'', ''nearly every day''. Results: 1012 subjects of 1116 approached agreed to participate (90.7% response rate), and 975 (96.3%) completed the PHQ-9. The PHQ-9 identified 225 (23.1%) subjects with MDD. Table 1 outlines the percent of subjects who were positive and the sensitivity, specificity, positive, and negative predictive values and kappa for each cut-off on the PHQ-2. Conclusion: The PHQ-2 is a sensitive and specific screening tool for MDD in the ED setting. Moreover, the PHQ-2 is closely correlated with the PHQ-9, especially if a score of 3 or greater is used. Given the simplicity and ease of using a two-item questionnaire and the high rates of undiagnosed depression in the ED, including this brief, self-administered screening tool to ED patients may allow for early awareness of possible MDD and appropriate evaluation and referral. patients. However, much of this self-harm behavior is not discovered clinically and very little is known about the prevalence and predictors of current ED screening practices. Attention to this issue is increasing due to the Joint Commission's Patient Safety Goal 15, which focuses on identification of suicide risk in patients. Objectives: To describe the prevalence and predictors of screening for self-harm and of presence of current self-harm in EDs. Methods: Data were obtained from the NIMH-funded Emergency Department Safety Assessment and Followup Evaluation (ED-SAFE). Eight U.S. EDs reviewed charts in real time for 35-40 hours a week between 8/ 2010 and 11/2011. All patients presenting during enrollment shifts were characterized as to whether a selfharm screening had been performed by ED clinicians. A subset of patients with a positive screening was asked about the presence of self-harm ideation, attempts, or both by trained research staff. We used multivariable logistic regression to identify predictors of screening and of current self-harm. Data were clustered by site. In each model we examined day and time of presentation, age < 65 years, sex, race, and ethnicity. Results: Of the 92,154 patients presenting during research shift, 24,240 (26%) were screened for self-harm. Screening rates varied among sites and ranged from 4% to 32%, with one outlier at 93%. Of those screened, 2,471 (10%) had current self-harm. 
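The ED-SAFE analysis above fits multivariable logistic regression models with observations clustered by site. One common way to account for that clustering is a generalized estimating equation (GEE) with an exchangeable working correlation; the sketch below illustrates the idea with simulated data and hypothetical variable names, not the ED-SAFE dataset.

```python
# Sketch: site-clustered logistic regression via GEE (exchangeable working correlation).
# The data frame, predictors, and values are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "screened":  rng.binomial(1, 0.26, n),   # self-harm screening performed
    "age_lt_65": rng.binomial(1, 0.8, n),
    "male":      rng.binomial(1, 0.5, n),
    "night":     rng.binomial(1, 0.3, n),
    "site":      rng.integers(1, 9, n),      # eight EDs
})

gee = smf.gee("screened ~ age_lt_65 + male + night", groups="site", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(gee.params))       # odds ratios
print(np.exp(gee.conf_int()))   # 95% confidence intervals
```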
Among those with self-harm approached by study personnel (n = 1,037), 916 (88%) had thoughts of self-harm (suicidal or non-suicidal), 806 (78%) had thoughts of suicide, 444 (43%) had self-harm behavior, and 316 (31%) had suicide attempt(s) over the preceding week. Predictors of being screened were: age < 65 years, male sex, weekend presentation, and night shift presentation (Table). Among those screened, predictors of current self-harm were: age < 65 years, white race, and night shift presentation. Conclusion: Screening for self-harm is uncommon in ED settings, though practices vary dramatically by site. Patients presenting at night and on weekends are more likely to be screened, as are those under age 65 and males. Current self-harm is more common among those presenting on night shift, those under age 65, and whites. Results: There were 1328 out-of-hospital records reviewed, and hospital discharge data were available in 1120 non-cardiac arrest patients. Of the 1120 patients, 1084 (96.8%) survived to hospital discharge and 36 (3.2%) died during hospitalization. The mean age of those transported was 54 years (SD 20), 612 (55%) were male, 128 (11%) were trauma-related, and 112 (10%) were admitted to the ICU. Average systolic blood pressure (SBP), pulse (P), respiratory rate (RR), oxygen saturation (O2 sat), and end-tidal carbon dioxide (ETCO2) were SBP = 141 (SD 29), P = 95 (SD 25), RR = 24 (SD 9), O2 sat = 95% (SD 8), and ETCO2 = 34 (SD 10). Conclusion: Of all the initial vital signs recorded in the out-of-hospital setting, ETCO2 was the most predictive of mortality. These findings suggest that prehospital ETCO2 is a useful clinical tool for determining severity of illness and appropriate triage. Background: The prehospital use of continuous positive airway pressure (CPAP) ventilation is a relatively new treatment for acute cardiogenic pulmonary edema (ACPE), and there is little high-quality evidence on its benefits or potential dangers in this setting. Objectives: The aim of this study was to determine whether patients in severe respiratory distress treated with CPAP in the prehospital setting have a lower mortality than those treated with usual care. Methods: Randomized, controlled trial comparing usual care versus CPAP (Whisperflow®) in a prehospital setting, for adults experiencing severe respiratory distress, with falling respiratory efforts, due to a presumed ACPE. Patients were randomised to receive either usual care, including conventional medications (nitrates, furosemide, and oxygen) plus bag-valve-mask ventilation, or conventional medications plus CPAP. The primary outcome was prehospital or in-hospital mortality. Secondary outcomes were need for tracheal intubation, length of hospital stay, change in vital signs, and arterial blood gas results. We calculated relative risk with 95% CIs. Results: Fifty patients were enrolled, with mean age 79.8 (SD 11.9), 56.0% male, and overall mortality 20.0%. The risk of death was significantly reduced in the CPAP arm, with mortality of 34.6% (9 deaths) in the usual care arm compared to 4.2% (1 death) in the CPAP arm (RR 0.12; 95% CI 0.02 to 0.88; p = 0.04). Patients who received CPAP were significantly less likely to have respiratory acidosis (mean difference in pH 0.09; 95% CI 0.01 to 0.16; p = 0.02; n = 24) than patients receiving usual care. The length of hospital stay was significantly less in the patients who received CPAP (mean difference 2.3 days; 95% CI −0.01 to 4.6, p = 0.05).
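The mortality relative risk quoted above (RR 0.12, 95% CI 0.02-0.88) can be reproduced from the reported arm sizes and deaths (1 of 24 with CPAP vs. 9 of 26 with usual care) using the standard log-RR interval; a minimal sketch follows.

```python
# Sketch: relative risk with a 95% CI (log-RR method) from the 2x2 counts above
# (1/24 deaths in the CPAP arm vs. 9/26 in the usual care arm).
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

rr, lo, hi = relative_risk(1, 24, 9, 26)   # CPAP relative to usual care
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With the counts above this prints RR = 0.12 (95% CI 0.02-0.88), matching the values reported in the abstract.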
Conclusion: We found that CPAP significantly reduced mortality, respiratory acidosis, and length of hospital stay for patients in severe respiratory distress caused by ACPE. This study shows the use of CPAP for ACPE improves patient outcomes in the prehospital setting. (Originally submitted as a ''late-breaker.'') Trial reg. ANZCTR ACTRN12609000410257; funding Fisher and Paykal suppliers of the WhisperflowÒ CPAP device. Background: Because emergency service utilization continues to climb, validated methods to safely identify and triage low-acuity patients to either alternate care destinations or a complaint-appropriate level of EMS response is of keen interest to EMS systems and potentially payers. Though the literature generally supports the Medical Priority Dispatch System (MPDS) as a tool to predict low-acuity patients by various standards, correlation with initial patient physiologic data and patient age is novel. Objectives: To determine whether the six MPDS priority determinants for Protocol 26 (sick person) can be used to predict initial EMS patient acuity assessment or severity of an aggregate physiologic score. Our longterm goal is to determine whether MPDS priority can be used to predict patient acuity and potentially send only a first responder to do an in-person assessment to confirm this acuity, while reserving ALS transport resources for higher acuity patients. Methods: Calls dispatched through the Wichita-Sedgwick County 9-1-1 center between July 20, 2009 and October 1, 2011 using MPDS Protocol 26 (sick person) were linked to the EMS patient care record for all patients 14 and older. The six MPDS priority determinants were evaluated for correlation with initial EMS acuity code, initial vital signs, Rapid Acute Physiology Score (RAPS), or patient age. The EMS acuity code scores patients from low to severe acuity, based on initial EMS assessment. Results: There were 9370 calls dispatched using Protocol 26 for those 14 years of age and older during the period, representing approximately 13% of all EMS calls. There is a significant difference in the first encounter vital signs among different MPDS priority levels. Based on the logistic regression model, the MPDS priority code alone had a sensitivity of 68% and specificity of 55% for identifying low-acuity patients with EMS Acuity Score as the standard. The area under the curve (AUC) for ROC is 0.62 for MPDS priority codes alone, while addition of age increases this value to 0.69. If we use the RAPS score as the standard to the MPDS priority code, AUC is 0.528. If we include both MPDS and age in the model, the AUC is 0.533. Conclusion: In our system, MPDS priority codes on Protocol 26 (sick person) alone, or with age or RAPS score, are not useful either as predictors of patient acuity on EMS arrival or to reconfigure system response or patient destination protocols. Alternate Ambulance Destination Program C. Nee-Kofi Mould-Millman 1 , Tim McMahan 2 , Michael Colman 2 , Leon H. Haley 1 , Arthur H. Yancey 1 1 Emory University, Atlanta, GA; 2 Grady EMS, Atlanta, GA Background: Low-acuity patients calling 9-1-1 are known to utilize a large proportion of EMS and ED resources. The National Association of EMS Physicians and ACEP jointly support EMS alternate destination programs (ADPs) in which low-acuity patients are allocated alternative resources non-emergently. Analysis of one year's ADP data from our EMS system revealed that only 4.5% of eligible patients were transported to alternate destinations (ambulatory clinics). 
Reasons for this low success rate need investigation. Objectives: To survey EMTs and discover the most frequent reasons given by them for transportation of eligible patients to EDs instead of to clinics. Methods: This study was conducted within a large, urban, hospital-based EMS system. Upon conducting an ADP for 12 months, a paper-based survey was created and pre-tested. All medics with any ADP-eligible patient contact were included. EMTs were asked about personal, patient, and system related factors contributing to ED transport during the last 3 months of the ADP. Qualitative data were coded, collated, and descriptively reported. Results: Sixty-three respondents (26 EMT-Intermediates and 37 EMT-Paramedics) completed the survey, representing 79% of eligible EMTs. Thirty-one EMTs (49%) responded that they did not attempt to recruit eligible patients into the ADP in the last 3 program months. Of those EMTs, 25 (81%) attributed their motive to multiple, prior, failed recruitment attempts. The 32 EMTs who actively recruited ADP patients were asked reasons given by patients for clinic transport refusals: 19 (60%) cited that patients reported no prior experience of care at the participating clinics, and 23 (72%) reported patients had a strong preference for care in an ED. Regarding system-related factors contributing to non-clinic transport, 24 of the 32 EMTs (75%) reported that clinic-consenting patients were denied clinic visits, mostly because of non-availability of same-day clinic appointments. Conclusion: Respondents indicated that poor EMT enrollment of eligible patients, lack of available clinic time slots, and patient preference for ED care were among the most frequent reasons contributing to the low success rate of the ADP. This information can be used to enhance the success of this, and potentially other ADP programs, through modifications to ADP operations and improved patient education. The Effect of a Standardized Offline Pain Treatment Protocol in the Prehospital Setting on Pediatric Pain Treatment Brent Kaziny 1 , Maija Holsti 1 , Nanette Dudley 1 , Peter Taillac 1 , Hsin-yi Weng 1 , Kathleen Adelgais 2 1 University of Utah, School of Medicine, Salt Lake City, UT; 2 University of Colorado, School of Medicine, Aurora, CO Background: Pain is often under treated in children. Barriers include need for IV access, fear of delayed transport, and possible complications. Protocols to treat pain in the prehospital setting improve rates of pain treatment in adults. The Utah EMS for Children (EMSC) Program developed offline pediatric protocol guidelines for EMS providers, including one protocol that allows intranasal analgesia delivery to children in the prehospital setting. Objectives: To compare the proportion of pediatric patients receiving analgesia for orthopedic injury by prehospital providers before and after implementation of an offline pediatric pain treatment protocol. Methods: We conducted a retrospective study of patients entered into the Utah Prehospital On-Line Active Reporting Information System (POLARIS, a database of statewide EMS cases) both before and after initiation of the pain protocol. Patients were included if they were age 3-17 years, with a GCS of 14-15, an isolated extremity injury, and were transported by an EMS agency that had adopted the protocol. Pain treatment was compared for 2 years before and 18 months after protocol implementation with a wash-out period of 12 months for agency training. 
The difference in treatment proportions between the two groups was analyzed and 95% CIs were calculated. Results: During the two study periods, 1155 patients met inclusion criteria. Patient demographics are outlined in the table. 93/501 (18.6%) patients were treated for pain before compared to 174/654 (26.6%) patients treated after the pain protocol was implemented; a difference of 8.0% (95% CI: 3.2%-12.8%). Patients were more likely to receive pain medication if they had a pain score documented (OR: 1.16; 95% CI: 1.09-1.22) and if they were treated after the implementation of a pain protocol (OR: 1.27; 95% CI: 1.00-1.62). Factors not associated with the treatment of pain include age, sex, and mechanism of injury. Conclusion: The creation and adoption of statewide EMSC pediatric offline protocol guideline for pain management is associated with a significant increase in use of analgesia for pediatric patients in the prehospital setting. Background: Evidence-based guidelines are needed to determine the appropriate use of air medical transport, as few criteria currently used predict the need for air transport to a trauma center. We previously developed a clinical decision rule (CDR) to predict mortality in injured, helicopter-transported patients. Objectives: This study is a prospective validation of the CDR in a new population. Methods: A prospective, observational cohort analysis of injured patients ( ‡16 y.o.) transported by helicopter from the scene to one of two Level I trauma centers. Variables analyzed included patient demographics, diagnoses, and clinical outcomes (in-hospital mortality, emergent surgery w/in 24 hrs, blood transfusion w/in 24 hrs, ICU admit greater than 24 hrs, combined outcome of all). Prehospital variables were prospectively obtained from air medical providers at the time of transport and included past medical history, mechanism of injury, and clinical factors. Descriptive statistics compared those with and without the outcomes of interest. The previous CDR (age ‡ 45, GCS £ 13, SBP < 90, flail chest) was prospectively applied to the new population to determine its accuracy and discriminatory ability. Results: 416 patients were transported from October 2010-August 2011. The majority of patients were male (59%), white (79%), with an injury occurring in a rural location (60%). Most injuries were blunt (95%) with a median ISS of 9. Overall mortality was 5%. The most common reasons for air transport were: MVC with high risk mechanism (17%), GCS £ 13 (16%), LOC >5 minutes (16%), and MVC >20 MPH (14%). Of these, only GCS £ 13 was significantly associated with any of the clinical outcomes. When applying the CDR, the model had a sensitivity of 100% (81.2%-100%), a specificity of 51.2% (50.6%-51.6%), a NPV of 100% (98.1%-100%), and a PPV of 9.9% (8.0%-9.9%) for mortality. The area under the curve for this model was 0.92, suggesting excellent discriminatory ability. Conclusion: The air transport decision rule in this study performed with high sensitivity and acceptable specificity in this validation cohort. Further external validation in other systems and with ground transported patients are needed in order to improve decision making for the use of helicopter transport of injured patients. Background: Acute non-variceal upper gastrointestinal (GI) bleeding is a common indication for hospital admission. To appropriately risk-stratify such patients, endoscopy is recommended within 24 hours. 
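The decision-rule performance reported above (sensitivity 100%, specificity 51.2%, NPV 100%, PPV 9.9% for mortality) reduces to proportions taken from a 2x2 table of rule-positive status against the outcome. The sketch below shows one way to compute such quantities with Wilson score intervals; the counts are hypothetical stand-ins of plausible magnitude, not the study's actual table.

```python
# Sketch: diagnostic test characteristics with Wilson 95% intervals.
# The 2x2 counts are hypothetical placeholders; only the calculation is illustrated.
from statsmodels.stats.proportion import proportion_confint

tp, fn, fp, tn = 21, 0, 193, 202   # rule-positive/negative vs. died/survived (hypothetical)

def prop_with_ci(k, n):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    return f"{k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})"

print("sensitivity:", prop_with_ci(tp, tp + fn))
print("specificity:", prop_with_ci(tn, tn + fp))
print("PPV:        ", prop_with_ci(tp, tp + fp))
print("NPV:        ", prop_with_ci(tn, tn + fn))
```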
Given the possibility to safely manage patients as outpatients after endoscopy, risk stratification as part of an emergency department (ED) observation unit (OU) protocol is proposed. Objectives: Our objective was to determine the ability of an OU upper GI bleeding protocol to identify a lowrisk population, and to expeditiously obtain endoscopy and disposition patients. We also identified rates of outcomes including changes in hemoglobin, abnormal endoscopy findings, admission, and revisits. Background: Acute uncomplicated pyelonephritis (pyelo) requires no imaging but a CT flank pain protocol (CTFPP) may be ordered to determine if patients with pyelo and flank pain also have an obstructing stone. The prevalence of kidney stone and the characteristics predictive of kidney stone in pyelo patients is unknown. Objectives: To determine elements on presentation that predict ureteral stone, as well as prevalence of stone and interventions in patients undergoing CT for pyelo. Methods: Retrospective study of patients at an academic ED who received a CTFPP scan between 8/05 and 4/09. 5497 CTFPPs were identified and 1899 randomly selected for review. Pyelo was defined as: positive urine dip for infection and >5 WBC/HPF on formal urinalysis in addition to flank pain/CVA tenderness, chills, fever, nausea, or vomiting. Patients were excluded for age < 18 y.o., renal disease, pregnancy, urological anomaly, or recent trauma. Clinical data (178 elements) were gathered blinded to CT findings; CT results were abstracted separately and blinded to clinical elements. CT findings of hydronephrosis and hyrdroureter (hydro) were used as a proxy for hydro that could be determined by ultrasound prior to CT. Patients were categorized into three groups: ureteral stone, no significant findings, and intervention or follow-up required. Classification and Regression Tree analysis was used to determine which variables could identify ureteral stone in this population of pyelo patients. Results: Out of the 1899 patients, 105 (7.0%) met criteria for pyelo; subjects had a mean age of 39 ± 15.9 and 82% (n = 87) were female. CT revealed 31 (29%, 95% CI = 0.22-0.39) symptomatic stones, and 72 (68%, 95% CI = 0.59-0.76) exams with no significant findings. Two patients needed intervention/ follow-up (1%, 95% CI = 0.0052-0.0667), one for perinephric hemorrhage and the other for pancreatitis. Hydro was predictive for ureteral stone with an OR = 18.4 (95% CI = 6.4-52, p < 0.0001). Eleven (35%) ureteral stone patients were admitted and 9 (8%) of them had procedures. Of these patients, 100% had CT signs of obstruction, 8 (88%) had hydronephrosis, and 1 (11%) had hydroureter. Conclusion: Hydronephrosis was predictive of ureteral stone and in-house procedures. Prospective study is needed to determine whether CT scan is warranted in patients with pyelonephritis but without hydronephrosis or hydroureter. Curative Objectives: The specific aim of this analysis was to describe characteristics of patients presenting to the emergency department (ED) at their index diagnosis, and to determine whether emergency presentation precludes treatment with curative intent. Methods: We performed a retrospective cohort analysis on a prospectively maintained institutional tumor registry to identify patients diagnosed with CRC from 2008-2010. EMRs were reviewed to identify which patients presented to the ED with acute symptoms of CRC as the initial sign of their illness. The primary outcome variable was treatment plan (curative vs. palliative). 
Secondary outcome variables included demographics, tumor type and location. Descriptive statistics were conducted for major variables. Chi-squre and Fisher's exact tests were used to detect the association between categorical variables. Two-sample t-test was used to identify the association between continuous and categorical variables. Results: Between Jan 1 2008 and Dec 31 2010, 376 patients were identified at our institution with CRC. 214 (57%) were male and 162 (43%) were female, with mean age 60.6; SD: 13.3. Thirty-three patients (8.8%) initially presented to the ED, of whom 5 (15.5%) received palliation. Of 339 patients who initially presented elsewhere, 69 (20.5%) received palliation. Acute ED presentation with CRC symptoms did not preclude treatment with curative intent (p = 0.47). Patients who presented emergently were more likely to be female (64% vs male 41%; p = 0.01) and older (65 vs. 60; p = 0.02). There was no statistically significant relationship between age, sex, tumor location, or type and treatment approach. Conclusion: Patients with CRC may present to the ED with acute symptoms, which ultimately leads to the diagnosis. Emergent presentation of CRC does not preclude patients from receiving therapy with curative intent. Cannabinoid (OR 2.93, , and white blood cell (WBC) count ‡14,000/mm 3 (OR 11.35, 95% CI 3.42-37.72). Conclusion: Age ‡65 years is not associated with need for admission from an ED observation unit. Older adults can successfully be cared for in these units. Initial temperature, respiratory rate, and pulse were not predictive of admission, but extremely elevated blood pressure was predictive. Other relevant predictor variables included comorbidities and elevated WBC count. Advanced age should not be a disqualifying criterion for disposition to an ED observation unit. Older Adult Fallers in the Emergency Department Luna Ragsdale, Cathleen Colon-Emeric Duke University, Durham, NC Background: Approximately 1/3 of community-dwelling older adults experience a fall each year, and 2.2 million are treated in U.S. emergency departments (ED) annually. The ED offers a potential location for identification of high-risk individuals and initiation of fall-prevention services that may decrease both fall rates and resource utilization. Objectives: The goal of this study was to: 1) validate an approach to identifying older adults presenting with falls to the ED using administrative data; and 2) characterize the older adult who falls and presents to the ED and determine the rate of repeat ED visits, both fall-related and all visits, after an index fall-related visit. Methods: We identified all older adults presenting to either of the two hospitals serving Durham County residents during a six month period. Manual chart review was completed for all encounters with ICD9 codes that may be fall-related. Charts were reviewed 12 months prior and 12 months post index visit. Descriptive statistics were used to describe the cohort. Results: A total of 4452 older adults were evaluated in the ED during this time period; 1714 (55.7%) had an ICD9 code for a potentially fall-related injury. Of these, record review identified 534 (12%) with a fall from standing height or less. Of the fallers, 65.9% of the patients were discharged, 31% were admitted, and 3% were admitted under observation. Of those who fell, 38.2% had an ED visit within the previous year. Approximately 1/3 (33.3%) of these were fall related. 
Over half (53.4%) of the patients who fell returned to the ED within one year of their index visit. A large proportion (44.4%) of the return visits was fall-related. Follow-up with a primary care provider or specialist was recommended in 46% of the patients who were discharged. Overall mortality rate for fallers over the year following the index visit was 18%. Conclusion: Greater than fifty percent of fallers will return to the ED after an index fall, with a large proportion of the visits related to a fall. A large number of these fallers are discharged home with less than fifty percent having recommended follow-up. The ED represents an important location to identify high-risk older adults to prevent subsequent injuries and resource utilization. Objectives: We studied whether falls from a standing position resulted in an increased risk for intracranial or cervical injury verses falling from a seated or lying position. Methods: This is a prospective observational study of patients over the age of 65 who presented with a chief complaint of fall to a tertiary care teaching facility. Patients were eligible for the study if they were over age 65, were considered to be at baseline mental status, and were not triaged to the trauma bay. At presentation, a questionnaire was filled out by the treating physician regarding mechanism and position of fall, with responses chosen from a closed list of possibilities. Radiographic imaging was obtained at the discretion of the treating physician. Charts of enrolled patients were subsequently reviewed to determine imaging results, repeat studies done, or recurrent visits. All patients were called in follow-up at 30 days to assess for delayed complications related to the fall. Data were entered into a standardized collection sheet by trained abstractors. Data were analyzed with Fisher's Exact test and descriptive statistics. This study was reviewed and approved by the institutional review board. Results: Two-hundred sixty two patients were enrolled during the study period. One-hundred ninety eight of these had fallen from standing and 64 fell from either sitting or lying positions. The mean age for patients was 84 (SD 7.9) for those who fell from standing and 84 (SD 8.4) for those who fell from sitting or lying. There were 6 patients with injuries who fell from standing: three with subdural hematomas, one with a cerebral contusion, one with an osteophyte fracture at C6, and one with an occipital condyle fracture with a chip fracture of C1. There were 2 patients with injuries who fell from a seated or lying position: one with a traumatic subarachnoid hemorrhage and one with a type II dens fracture. The overall rate of traumatic intracranial or cervical injury in elders who fell was 3%. No patients required surgical intervention. There was no difference in rate of injury between elders who fell from standing versus those who fell from sitting or lying (p = 1). (table) . Conclusion: Both instruments identify the majority of patients as high-risk which will not be helpful in allocating scarce resources. Neither the ISAR nor the TRST can distinguish geriatric ED patients at high or low risk for 1or 3-month adverse outcomes. These prognostic instruments are not more accurate in dementia or lower literacy subsets. Future instruments will need to incorporate different domains related to short-term adverse outcomes. 
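The comparison of injury rates by fall position above (6 of 198 falls from standing vs. 2 of 64 falls from sitting or lying, p = 1) is a Fisher's exact test on a 2x2 table; a minimal sketch using those reported counts follows.

```python
# Sketch: Fisher's exact test on the 2x2 counts reported above
# (6/198 injured after falls from standing vs. 2/64 after falls from sitting/lying).
from scipy.stats import fisher_exact

table = [[6, 198 - 6],
         [2, 64 - 2]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```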
Background: For older adults, both inpatient and outpatient care involves not only the patient and physician, but often a family member or informal caregiver. They can assist in medical decision making and in performing the patient's activities of daily living. To date, multiple outpatient studies have examined the positive roles family members play during the physician visit. However, there is very limited information on the involvement of the caregiver in the ED and their relationship with the health outcomes of the patient. Objectives: To assess whether the presence of a caregiver influences the overall satisfaction, disposition, and outpatient follow-up of elderly patients. We performed a three-step inquiry of patients over 65 years old who arrived to the UPenn ED. Patients and care partners were initially given a questionnaire to understand basic demographic data. At the end of the ED stay, patients were given a satisfaction survey and followed through 30 days to assess time to disposition, whether the patient was admitted or discharged, outpatient follow-up, and ED revisit rates. Chi-square and t-tests were used to examine the strength of differences in the elderly patients' sociodemographics, self-rated health, receiving aid with their Instrumental Activities of Daily Living, and number of health problems by accompaniment status. Multivariate regression models were constructed to examine whether the presence or absence of caregivers affected satisfaction, disposition, and follow-up. Results: Overall satisfaction was higher among patients who had caregivers (2.4 points), among patients who felt they were respected by their physician (3.8 points), and had lower lengths of stay (2 hours). Patients with caregivers were also more likely to be discharged home (OR 2.4) and to follow-up with their regular physician (OR 2.1). There was no evidence to suggest caregivers affected the overall rates of revisits back to an ED. Conclusion: For older adults, medical care involves not only the patient and physician, but often a family member or an informal care companion. These results demonstrate the positive influence of caregivers on the patients they accompany, and emergency physicians should define ways to engage these caregivers during their ED stay. This will also allow caregivers to participate when needed and can help to facilitate transitions across care settings. Background: Shared decision making has been shown to improve patient satisfaction and clinical outcomes for chronic disease management. Given the presence of individual variations in the effectiveness and side effects of commonly used analgesics in older adults, shared decision making might also improve clinical outcomes in this setting. Objectives: We sought to characterize shared decision making regarding the selection of an outpatient analgesic for older ED patients with acute musculoskeletal pain and to examine associations with outcomes. Methods: We conducted a prospective observational study with consecutive enrollment of patients age 65 or older discharged from the ED following evaluation for moderate or severe musculoskeletal pain. Two essential components of shared decision making, 1) information provided to the patient and 2) patient participation in the decision, were assessed via patient interview at one week using four-level Likert scales. Results: Of 233 eligible patients, 110 were reached by phone and 87 completed the survey. 
Only 25% (21/87) of patients reported receiving 'a lot' of information about the analgesic, and only 21% (18/87) reported participating 'a lot' in the selection of the analgesic. There were trends towards white patients (p = 0.06) and patients with higher educational attainment (p = 0.07) reporting more participation in the decision. After adjusting for sex, race, education, and initial pain severity, patients who reported receiving 'a lot' of information were more likely to report optimal satisfaction with the analgesic than those receiving less information (78% vs. 47%, p < 0.05). After the same adjustments, patients who reported participating 'a lot' in the decision were also more likely to report optimal satisfaction with the analgesic (82% vs. 47%, p < 0.05) and greater reductions in pain scores (mean reduction in pain 4.6 vs. 2.7, p < 0.05) at one week than those who participated less. Background: Quality of life (QOL) measurements have become increasingly important in outcomes-based research and cost-utility analyses. Dementia is a prevalent, often unrecognized, geriatric syndrome that may limit the accuracy of patient self-report in a subset of patients. The relationship between caregiver and geriatric patient QOL in the emergency department (ED) is not well understood. Objectives: To qualify the relationship between caregiver and geriatric patient QOL ratings in ED patients with and without cognitive dysfunction. Methods: This was a prospective, consecutive patient, cross-sectional study over two months at one urban academic medical center. Trained research assistants screened for cognitive dysfunction using the Short Blessed Test and evaluated health impairment using the Quality of Life-Alzheimer's Disease (QOL-AD) Test. When available in the ED, caregivers were asked to independently complete the QOL-AD. Consenting subjects were non-critically ill, English-speaking, community-dwelling adults over 65 years of age. Responses were compared using Wilcoxon Signed Ranks test to assess the relationships between patient and caregiver scores from the QOL-AD stratified by normal or abnormal cognitive screening results. Significance was defined by p < 0.05. Results: Patient QOL ratings were obtained from 108 patient-caregiver pairs. Patients were 51% female, 52% African-American, with a mean age of 76-years, and 58% had abnormal cognitive screening tests. Compared with caregivers, cognitively normal patients had no significant QOL assessment differences except for questions of energy level and overall mood. On the other hand, cognitively impaired patients differed significantly on questions of energy level and ability to perform household chores with a trend towards significant differences for living setting (p = 0.097) and financial situation (p = 0.057). In each category, the differences reflected a caregiver underestimation of quality compared with the patient's self-rating. Conclusion: Discrepancies between QOL domains and total scores for patients with cognitive dysfunction and their caregivers highlights the importance of identifying cognitive dysfunction in ED-based outcomes research and cost-utility analyses. Further research is needed to quantify the clinical importance of the patient-and caregiver-assessed quality of life. Background: Age is often a predictor for increased morbidity and mortality. However, it is unclear whether old age is a predictor of adverse outcome in syncope. 
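The patient-caregiver QOL-AD comparisons above rely on the Wilcoxon signed-rank test applied to paired ratings. A minimal sketch of that paired comparison is shown below; the item scores are hypothetical placeholders, not study data.

```python
# Sketch: Wilcoxon signed-rank test on paired patient vs. caregiver ratings
# for a single QOL-AD item (scores 1-4). Values are hypothetical placeholders.
from scipy.stats import wilcoxon

patient_energy   = [3, 2, 4, 4, 2, 3, 3, 4, 2, 3]
caregiver_energy = [2, 2, 3, 3, 1, 2, 3, 3, 1, 2]

stat, p_value = wilcoxon(patient_energy, caregiver_energy)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```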
Objectives: To determine whether old age is an independent predictor of adverse outcome in patients presenting to the emergency department following a syncopal episode. Methods: A prospective observational study was conducted from June 2003 to July 2006 enrolling consecutive adult ED patients (>18 years) presenting with syncope. Syncope was defined as an episode of transient loss of consciousness. Adverse outcome or critical intervention were defined as gastrointestinal bleeding or other hemorrhage, myocardial infarction/percutaneous coronary intervention, dysrhythmia, alteration in antidysrhythmics, pacemaker/defibrillator placement, sepsis, stroke, death, pulmonary embolus, or carotid stenosis. Outcomes were identified by chart review and 30-day follow-up phone calls. Results: Of 575 patients who met inclusion criteria, an adverse event occurred in 24% of patients. Overall, 35% of patients with risk factors had adverse outcomes compared to 1.6% of patients with no risk factors. In particular, 28/127 (22%; 95% CI 16-30%) of patients <65 with risk factors had adverse outcomes, while 85/196 (43%; 95% CI 36-50%) of the elderly with risk factors had adverse outcomes. In contrast, among young people 2/196 (1%; 95% CI 0.04-3.8%) of patients without risk factors had adverse outcomes while 2/56 (3.6%; 95% CI 0.28-13%) of patients ‡65 without risk factors had adverse outcomes. Conclusion: Although the elderly are at greater risk for adverse outcomes in syncope, age ‡ 65 or older alone does not appear to be a predictor of adverse outcome following a syncopal event. Based on these data, it should be safe to discharge home from the ED patients with syncope, but without risk factors, regardless of age. (Originally submitted as a ''late-breaker.'') Antibiotics Background: Adherence to national guidelines for HIV and syphilis screening in EDs is not routine. In our ED, HIV and syphilis screening rates among patients tested for gonorrhea and chlamydia (GC/CT) have been reported to be 45% and 30%, respectively. Objectives: To determine the effect of a sexually transmitted infection (STI) laboratory order set on HIV and syphilis screening among ED patients tested for GC/CT. We hypothesized that a STI order set would increase screening rates by at least 30%. Methods: A 6-month, quasi-experimental study in an urban ED comparing HIV and syphilis screening rates of GC/CT-tested patients before (control phase) and after the implementation of a STI laboratory order set (intervention phase). The order set linked blood-based rapid HIV and syphilis screening with GC/CT testing. Consecutive patients completing GC/CT testing were included. The primary outcome was the absolute difference in HIV and syphilis screening rates among GC/ CT-tested patients between phases. We estimated that 550 subjects per phase were needed to provide 90% power (p-value of £0.05) to detect an absolute difference in screening rates of 10%, assuming a baseline HIV screening rate of 45%. Results: The ED census was 42,461. Characteristics of patients tested for GC/CT were similar between phases: the mean age was 33 years (SD = 12) and most were female (65%), black (49%), Hispanic (30%), and unmarried (84% Services have recommended the use of immunization programs against influenza disease within hospitals since the 1980s. The emergency department (ED) being the ''safety net'' for most non-insured people is an ideal setting to intervene and provide primary prevention from influenza. 
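The sample-size estimate described above for the STI order-set study (roughly 550 subjects per phase for 90% power to detect a 10-point absolute increase from a 45% baseline at a two-sided alpha of 0.05) is a standard two-proportion power calculation. The sketch below shows the generic form of that calculation; because the authors' exact assumptions and software are not stated, it illustrates the approach rather than reproducing their figure.

```python
# Sketch: per-group sample size for detecting a 45% -> 55% change in a proportion
# with 90% power and two-sided alpha = 0.05. Assumptions are illustrative only.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.55, 0.45)      # Cohen's h for the two proportions
n_per_group = NormalIndPower().solve_power(effect_size=effect, power=0.90,
                                           alpha=0.05, ratio=1.0)
print(f"approximately {n_per_group:.0f} subjects per group")
```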
Objectives: The purpose of this study is to assess whether a pharmacist-based influenza immunization program is feasible in the ED, and successful in increasing the percentage of adult patients receiving the influenza vaccine. Methods: Implementation of pharmacist-based immunization program was developed in coordination with ED physicians and nursing staff in 2010. The nursing staff, using an embedded electronic questionnaire within their triage activity, screened patients for eligibility for the influenza vaccine. The pharmacist using an electronic alert system within the electronic medical record identified patients who we deemed eligible and if agreed the pharmacist vaccinated the patient. Patients who refused to be vaccinated were surveyed to ascertain their perception concerning immunization offered by a pharmacist in the ED. Feasibility and safety data for vaccinating patient in the ED were recorded. Results: 149 patients were approached and enrolled into the study. Of the 149, 41% agreed to receive the influenza vaccine from a pharmacist in the ED. The median screening time was 5 minutes and median vaccination time was 3 minutes for a total of 8 minutes from screening time to vaccination time. 74% were willing to receive the influenza vaccine from a pharmacist, and 78% were willing to receive the vaccine in the ED. The main reason given for refusing to receive the influenza vaccine was ''patient does not feel at risk of getting the disease''; only 14.6% stated they were vaccinated recently. Conclusion: A pharmacist-based influenza immunization program is feasible in the ED, and has the potential to successfully increase the percentage of adult patients receiving the vaccine. 1.4 ± 0.1, p < 0.05). ED visits by HIV-infected patients also had longer lengths of ED stay (317 ± 26.0 minutes vs. 222.5 ± 5.6 minutes, p < 0.05) and were more likely to be admitted (29% vs. 15%, p < 0.05), than their non-HIV infected counterparts. Conclusion: Although ED visits by HIV-infected individuals in the U.S. are relatively infrequent, they occur at rates higher than the general population, and consume significantly more ED resources than the general population. The Background: The influence of wound age on the risk of infection in simple lacerations repaired in the emergency department (ED) has not been well studied. It has traditionally been taught that there is a ''golden period'' beyond which lacerations are at higher risk of infection and therefore should not be closed primarily. The proposed cutoff for this golden period has been highly variable (3-24 hours in surgical textbooks). Objectives: To answer the following research question: are wounds closed via primary repair after the golden period at increased risk for infection? Methods: We searched MEDLINE, EMBASE, and other databases as well as bibliographies of relevant articles. We included studies that enrolled ED patients with lacerations repaired by primary closure. Exclusion: 1. intentional delayed primary repair or secondary closure, 2. wounds requiring intra-operative repair, skin graft, drains, or extensive debridement, and 3. grossly contaminated or infected at presentation. We compared the outcome of wound infection in two groups of early versus delayed presentations (based on the cut-offs selected by the original articles). We used ''Grading of Recommendations Assessment, Development and Evaluation'' (GRADE) criteria to assess the quality of the included trials. Frequencies are presented as percentages with 95% confidence intervals. 
Relative risk (RR) of infection is reported when clinically significant. Results: 418 studies were identified. Four trials enrolling 3724 patients in aggregate met our inclusion/exclusion criteria. Two studies used a 6-hour cut-off and the other two used a 12-hour cut-off for defining delayed wounds. The overall quality of evidence was low. The infection rate in the wounds that presented with delay ranged from 1.4% to 32%. One study with the smallest sample size (Morgan et al), which only enrolled lacerations to the hand and forearm, showed higher rates of infection in patients with delayed wounds (table). The infection rates in delayed wound groups in the remaining three studies were not significantly different from the early wounds. Conclusion: The evidence does not support the existence of a golden period, nor does it support the role of wound age on infection rate in simple lacerations. Background: Although clinical studies in children have shown that temperature elevation is an independent and significant predictor of bacteremia in children, the relationship in adults is largely unknown or equivocal. Objectives: Review the incidence of positive blood cultures on critically ill adult septic patients presenting to an emergency department (ED) and determine the association of initial temperature with bacteremia. Methods: July 2008 to July 2010 retrospective chart review on all patients admitted from the ED to an urban community hospital with sepsis and subsequently expiring within 4 days of admission. Fever was defined as a temperature ‡38°C. SIRS criteria were defined as: 1) temperature ‡38°C or £36°C, 2) heart rate ‡90 beats/ minute, 3) respiratory rate ‡20 or mechanical ventilation, 4) WBC ‡ 12,000/mm 3 or <4,000 or bands ‡10%. Objectives: We examined the utility of limited genetic sequencing of bacterial isolates using multilocus sequence typing (MLST) to discriminate between known pathogenic blood culture isolates of S. epidermidis and isolates recovered from skin. Methods: Ten blood culture isolates from patients meeting the Centers for Disease Control and Prevention (CDC) criteria for clinically significant S. epidermidis bacteremia and ten isolates from the skin of healthy volunteers were studied. MLST was performed by sequencing 400 bp regions of seven genes (arc, aroE, gtR, mutS, pyr, tpiA, and yqiL) . Genetic variability at these sites was compared to an international database (www.sepidermidis.mlst.net) and each strain was then categorized into a genotype on the basis of known genetic variation. The ability of the gene sequences to correctly classify strains was quantified using the support vector machine function in the statistical package R. 1,000 bootstrap resamples were performed to generate confidence bounds around the accuracy estimates. Results: Between-strain variability was considerable, with yqiL being most variable (6 alleles) and tpiA being least (1 allele). The mutS gene, responsible for DNA repair in S. epidermidis, showed almost complete separation between pathogenic and commensal strains. When the seven genes were used in a joint model, they correctly predicted bacterial strain type with 90% accuracy (IQR 85, 95%). Conclusion: Multilocus sequence typing shows excellent early promise as a means of distinguishing contaminant versus truly pathogenic isolates of S. epidermidis from clinical samples. Near-term future goals will involve developing more rapid means of sequencing and enrolling a larger cohort to verify assay performance. 
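The MLST analysis above classifies isolates with a support vector machine and reports bootstrap confidence bounds around accuracy (90%, IQR 85-95%). The sketch below shows a stripped-down version of that workflow with hypothetical allele profiles; the published analysis was run in R, and a fuller treatment would encode allele numbers as categorical features rather than raw integers.

```python
# Sketch: SVM classification of MLST allele profiles with bootstrap accuracy estimates.
# The allele matrix and labels are simulated placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

rng = np.random.default_rng(2)
X = rng.integers(1, 7, size=(20, 7))        # 20 isolates x 7 MLST loci (allele numbers)
y = np.array([1] * 10 + [0] * 10)           # 1 = blood-culture isolate, 0 = skin isolate

accuracies = []
for _ in range(1000):
    Xb, yb = resample(X, y, stratify=y)     # bootstrap resample, keeping both classes
    clf = SVC(kernel="linear").fit(Xb, yb)
    accuracies.append(clf.score(X, y))      # accuracy of the resampled fit on all isolates

print("median accuracy:", np.median(accuracies))
print("IQR:", np.percentile(accuracies, [25, 75]))
```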
conference are presented by influenza scenario in Table 1 and Background: Antiviral medications are recommended for patients with influenza who are hospitalized or at high risk for complications. However, timely diagnosis of influenza in the ED remains challenging. Influenza rapid antigen tests have short turn-around times, making them potentially useful in the ED setting, but their sensitivities may be too low to assist with treatment decisions. Objectives: To evaluate the test characteristics of the BinaxNow Influenza A&B rapid antigen test (RAT) in ED patients. Methods: We prospectively enrolled a systematic sample of patients of all ages presenting to two EDs with acute respiratory symptoms or fever during three consecutive influenza seasons (2008) (2009) (2010) (2011) . Research personnel collected nasal and throat swabs, which were combined and tested for influenza with RT-PCR using CDC-provided primers and probes. ED clinicians independently decided whether to obtain a RAT during clinical care. RATs were performed in the clinical laboratory using the BinaxNow Influenza A&B test on nasal swabs collected by ED staff. The study cohort included subjects who underwent both a research PCR and clinical RAT. RAT test characteristics were evaluated using PCR as the criterion standard with stratified sub-analyses for age group and influenza subtype (pandemic H1N1 (pH1N1), non-pandemic influenza A, influenza B). Results: 561 subjects were enrolled; 131 subjects were PCR positive for influenza (76 pH1N1, 20 non-pandemic influenza A, and 35 influenza B). For all age groups, RAT sensitivities were low and specificities were high ( HIV infection with CD4 < 200; and among nursing home residents, inability to independently perform activities of daily living. Sources for bacterial cultures included blood, sputum (adults only), bronchoalveolar lavage (BAL), tracheal aspirate, and pleural fluid. Only sputum specimens with a Bartlett score ‡1+ were considered adequate for culturing. Results: Among 461 children enrolled, 7 (2%) had S. aureus cultured from ‡1 specimen, including 5 with methicillin-resistant S. aureus (MRSA) and 2 with methicillin-susceptible S. aureus (MSSA). Specimens positive for S. aureus included 3 pleural fluid, 2 blood, 2 tracheal aspirates, and 1 BAL. Two children with S. aureus had evidence of co-infection: 1 influenza A, and 1 Streptococcus pneumoniae. Among 673 adults enrolled, 17 (3%) grew S. aureus from ‡1 specimen, including 9 with MRSA and 8 with MSSA. Specimens positive for S. aureus included 5 blood, 11 sputum, and 3 BAL. Five adults with S. aureus had evidence of co-infections: 2 coronavirus, 1 respiratory syncytial virus, 1 S. pneumoniae, and 1 Pseudomonas aeruginosa. Presenting clinical characteristics and outcomes of subjects with staphylococcal CAP are summarized in Tables 1-2. Conclusion: These preliminary findings suggest S. aureus is an uncommon cause of CAP. Although the small number of staphylococcal cases limits conclusions that can be drawn, in our analysis staphylococcal CAP appears to be associated with co-infections, pleural effusions, and severe disease. Future work will focus on continued enrollment and developing clinical prediction models to aid in diagnosing staphylococcal CAP in the ED. Background: Emergency care has been a neglected public health challenge in Sub-Saharan Africa. The goal of Global Emergency Care Collaborative (GECC) is to develop a sustainable model for emergency care delivery in low-resource settings. 
GECC is developing a training program for emergency care practitioners (ECPs). Objectives: To analyze the first 500 patient visits at Karoli Lwanga ''Nyakibale'' Hospital ED in rural Uganda to determine the knowledge and skills needed in training ECPs. Methods: A descriptive cross-sectional analysis of the first 500 consecutive patient visits in the ED's patient care log was reviewed by an unblinded abstractor. Data on demographics, procedures, laboratory testing, bedside ultrasounds (US) performed, radiographs (XRs) ordered, and diagnoses were collated. All authors discussed uncertainties and formed a consensus. Descriptive statistics were performed. Results: Of the first 500 patient visits, procedures were performed in 367 (73.4%) patients, including 244 (48.8%) who had IVs placed, 47 (9.4%) who received wound care, and 42 (8.4%) who received sutures. Complex procedures, such as procedural sedations, lumbar punctures, orthopedic reductions, nerve blocks, and tube thoracostomies, occurred in 49 (9.8%) patients. Laboratory testing, XRs, and USs were performed in 188,(37.6%), 99 (19.8%), and 45 (7%) patients, respectively. Infectious diseases were diagnosed in 217 (43.4%) patients; 78 (15.6 %) with malaria and 57 (11.4%) with pneumonia. Traumatic injuries were present in 140 (28%) patients; 77 (15.4%) needing wound care and 31 (6.2%) with fractures. Gastrointestinal and neurological diagnoses affected 58 (11.6%) and 27 (5.4%) patients, respectively. Conclusion: ECPs providing emergency care in Sub-Saharan Africa will be required to treat a wide variety of patient complaints and effectively use laboratory testing, XRs, and USs. This demands training in a broad range of clinical, diagnostic, and procedural skills, specifically in infectious disease and trauma, the two most prevalent conditions seen in this rural Sub-Saharan Africa ED. Assessment of Point-of-care Ultrasound in Tanzania Background: Current Chinese EMS is faced with many challenges due to a lack of systematic planning, national standards in training, and standardized protocols for prehospital patient evaluation and management. Objectives: To estimate the frequency with which prehospital care providers perform critical actions for selected chief complaints in a county-level EMS system in Hunan Province, China. Methods: In collaboration with Xiangya Hospital (XYH), Central South University in Hunan, China, we collected data pertaining to prehospital evaluation of patients on EMS dispatches from a ''1-2-0'' call center over a 2-month period. This call center services an area of just under 5000 km 2 with a total population of 1.36 million. Each EMS team consists of a driver, a nurse, and a physician. This was a cross-sectional study where a single trained observer accompanied EMS teams on transports of patients with a chief complaint of chest pain, dyspnea, trauma, or altered mental status. In this convenience sample, data were collected daily between 8 AM and 6 PM. Critical actions were pre-determined by a panel of emergency medicine faculty from XYH and the University of Maryland School of Medicine. Simple statistical analysis was performed to determine the frequency of critical actions performed by EMS providers. Results: During the study period, 1170 patients were transported, 452 of whom met the inclusion criteria. 218 (48.2%) evaluations were observed directly for critical actions. The table shows the frequency of critical actions performed by chief complaint. 
None of the patients with chest pain received an ECG even though the equipment was available. Rapid glucose was checked in only 2.1% of patients presenting with altered mental status. A lung exam was performed in 22.7% of patients with dyspnea, and the respiratory rate was measured in 9.1%. Among patients transported for trauma, blood pressure and heart rate were measured in only 1% and 4.1%, respectively. Conclusion: In this observational study of prehospital patient assessments in a county-level EMS system, critical actions were performed infrequently for the chief complaints of interest. Performance frequencies for critical actions ranged from 0 to 22.7%, depending on the chief complaint. Standardized prehospital patient care protocols should be established in China, and further training is needed to optimize patient assessment. Trends Little is known about the comparative effectiveness of noninvasive ventilation (NIV) versus invasive mechanical ventilation (IMV) in chronic obstructive pulmonary disease (COPD) patients with acute respiratory failure. Objectives: To characterize the use of NIV and IMV in COPD patients presenting to the emergency department (ED) with acute respiratory failure and to compare the effectiveness of NIV vs. IMV. Methods: We analyzed the 2006-2008 Nationwide Emergency Department Sample (NEDS), the largest all-payer US ED and inpatient database. ED visits for COPD with acute respiratory failure were identified with a combination of COPD exacerbation and respiratory failure ICD-9-CM codes. Patients were divided into three treatment groups: NIV use, IMV use, and combined use of NIV and IMV. The outcome measures were inpatient mortality, hospital length of stay (LOS), hospital charges, and complications. Propensity score analysis was performed using 42 patient and hospital characteristics and selected interaction terms. Results: There were an estimated 101,000 visits annually for COPD exacerbation and respiratory failure from approximately 4,700 EDs. Ninety-six percent were admitted to the hospital. Of these, NIV use increased slightly from 14% in 2006 to 16% in 2008 (P = 0.049), while IMV use decreased from 28% in 2006 to 19% in 2008 (P < 0.001); combined use remained stable (4%). Inpatient mortality decreased from 10% in 2006 to 7% in 2008 (P < 0.001). NIV use varied widely between hospitals, ranging from 0% to 100% with a median of 11%. In a propensity score analysis, NIV use (compared to IMV) significantly reduced inpatient mortality (risk ratio 0.57; 95% confidence interval [CI] 0.48-0.56), shortened hospital LOS (difference −3 days; 95% CI −4 to −3), and reduced hospital charges. NIV use was associated with a lower rate of iatrogenic pneumothorax compared with IMV use (0.04% vs. 0.6%, P < 0.001). An instrumental variable analysis confirmed the benefits of NIV use, with a 5% reduction in inpatient mortality in the NIV-preferring hospitals. Conclusion: NIV use is increasing in US hospitals for COPD with acute respiratory failure; however, its adoption remains low and varies widely between hospitals. NIV appears to be more effective and safer than IMV in the real-world setting. Background: Dyspnea is a common ED complaint with a broad differential diagnosis and disease-specific treatment. Bronchospasm alters capnographic waveforms, but the effect of other causes of dyspnea on waveform morphology is unclear.
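The NIV-versus-IMV comparison above relies on a propensity score built from 42 patient and hospital characteristics plus interaction terms. The sketch below shows a heavily simplified inverse-probability-weighting version of that idea with hypothetical variables; it is not the authors' exact method, which used a much richer covariate set and, separately, an instrumental variable analysis.

```python
# Sketch: propensity-score (inverse probability of treatment) weighting for an
# NIV vs. IMV comparison. All variables and values are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "niv":      rng.binomial(1, 0.4, n),    # 1 = noninvasive ventilation, 0 = IMV
    "died":     rng.binomial(1, 0.09, n),   # inpatient mortality
    "age":      rng.integers(40, 95, n),
    "teaching": rng.binomial(1, 0.3, n),    # hospital characteristic
})

# 1) propensity model: probability of receiving NIV given covariates
ps = smf.logit("niv ~ age + teaching", data=df).fit(disp=False).predict(df)

# 2) inverse-probability-of-treatment weights
df["w"] = np.where(df["niv"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# 3) weighted outcome model: risk difference for NIV vs. IMV
wls = smf.wls("died ~ niv", data=df, weights=df["w"]).fit()
print("weighted risk difference:", wls.params["niv"])
```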
Objectives: We evaluated the utility of capnographic waveforms in distinguishing dyspnea caused by reactive airway disease (RAD) from non-RAD in adult ED patients. Methods: This was a prospective, observational, pilot study of a convenience sample of adult patients presenting to the ED with dyspnea. Waveforms, demographics, past medical history, and visit data were collected. Waveforms were independently interpreted by two blinded reviewers. When the interpreters disagreed, the waveform was re-reviewed by both reviewers and an agreement was reached. Treating physician diagnosis was considered the criterion standard. Descriptive statistics were used to characterize the study population. Diagnostic test characteristics and inter-rater reliability are given. Results: Fifty subjects were enrolled. Median age was 52 years (range 21-82), 50% were female, and 34% were Caucasian. 29/50 (58%) had a history of asthma or chronic obstructive pulmonary disease. RAD was diagnosed by the treating physician in 19/50 (38%), and 32/50 (64%) had received treatment for dyspnea prior to waveform acquisition. The interpreters agreed on waveform analysis in 47/50 (94%) cases (kappa = 0.88). Test characteristics for presence of acute RAD, including 95% CI, were: overall accuracy 70% (55.2%-81.7%), sensitivity 69% (43.5%-86.4%), specificity 71% (51.8%-85.1%), positive predictive value 59% (36.7%-78.5%), negative predictive value 79% (58.5%-91.0%), positive likelihood ratio 2.25 (1.36-3.72), negative likelihood ratio 0.42 (0.23-0.74). Conclusion: Inter-rater agreement is high for capnographic waveform interpretation, which shows promise for helping to distinguish between dyspnea caused by RAD and dyspnea from other causes in the ED. Treatments received prior to waveform acquisition may affect agreement between waveform interpretation and physician diagnosis, affecting the observed test characteristics. Background: Asthma and chronic obstructive pulmonary disease (COPD) patients who present to the emergency department (ED) usually lack adequate ambulatory disease control. While evidence-based care in the ED is now well defined, there is limited information regarding the pharmacologic or non-pharmacologic needs of these patients at discharge. Objectives: This study evaluated patients' needs with regard to the ambulatory management of their respiratory conditions after ED treatment and discharge. Methods: Over 6 months, 94 adult patients with acute asthma or COPD, presenting to a tertiary care Alberta Hospital ED and discharged after being treated for exacerbations, were enrolled. Using results from standardized in-person questionnaires, charts were reviewed by respiratory researchers to identify care gaps. Results: Overall, 58 asthmatic and 36 COPD patients were enrolled. More patients with asthma required education on spacer devices (52% vs 31%). Few asthma (9%) and no COPD patients had written action plans; asthma patients were more likely to need adherence counseling (53% vs 36%) for preventer medications. More patients with asthma required influenza vaccination (72% vs 39%; p = 0.003); pneumococcal immunization was low (36%) in COPD patients. Only 22% of asthmatics reported ever being referred to an asthma education program and 19% of the COPD patients reported ever being referred to pulmonary rehabilitation. At ED presentation, 28% of the asthmatics required the addition of inhaled corticosteroids (ICS) and 16% required the addition of ICS/long acting beta-agonist (ICS/LABA) combination agents.
On the other hand, 36% of COPD patients required the addition of long-acting anticholinergics while most (83%) were receiving preventer medications. Finally, 31% of COPD and 29% of asthma patients who smoked required smoking cessation counseling. Conclusion: Overall, we identified various care gaps for patients presenting to the ED with asthma and COPD. There is an urgent need for high-quality research on interventions to reduce these gaps. Methods: This is an interim sub-analysis of an interventional, double-blinded study performed in an academic urban-based adult ED. Subjects with acute exacerbation of asthma with FEV1 < 50% predicted within 30 minutes following initiation of ''standard care'' (including a minimum of 5 mg nebulized albuterol, 0.5 mg nebulized ipratropium, and 50 mg corticosteroid) who consented to be in a trial were included. All treatment was administered by emergency physicians unaware of the study objectives. Patients were randomly assigned to treatment with placebo or an intravenous beta agonist. All subjects had FEV1 and dyspnea score (DS) obtained at baseline and at 1, 2, and 3 hours after treatment. FEV1 was measured using a bedside Nspire spirometer, and DS was calculated using a Modified Borg Dyspnea score. Results: Thirty-eight patients were included for analysis. Spearman's Rho test (Rho) was used to measure correlations between FEV1 and DS at 1, 2, and 3 hours post study entry and subsequent hospitalization. Rho is negative for FEV1 (higher FEV1 correlates to lower rate of hospitalization) and positive for DS (higher DS correlates to higher rate of hospitalization). At each time point, DS was more highly correlated to hospitalization than was FEV1 (see table). Conclusion: Dyspnea scores at 1, 2, and 3 hours were significantly correlated with hospital admission, whereas FEV1 was not. In this set of subjects with moderate to severe asthma exacerbations, a standardized subjective tool was superior to FEV1 for predicting subsequent hospitalization. Methods: This is an interim subgroup analysis of a prospective, interventional, double-blind study performed in an academic urban ED. Subjects who were consented for this trial presented with acute asthma exacerbations with FEV1 ≤ 50% predicted within 30 minutes following initiation of ''standard care'' (including a minimum of 2.5 mg nebulized albuterol, 0.5 mg nebulized ipratropium, and 50 mg of a corticosteroid). ED physicians who were unaware of the study objectives administered all treatments. Subjects were randomized in a 1:1 ratio to either placebo or investigational intravenous beta agonist arms. Blood was obtained at 1 and 1.25 hours after the start of the hour-long infusion. Blood was centrifuged and serum stored at −80°C, and then shipped on dry ice for albuterol and lactate measurements at a central lab. The treatment lactate and Δ lactate were correlated with 1 hr serum albuterol concentrations and hospital admission using partial Pearson correlations to adjust for DS. Results: 38 subjects were enrolled to date, 20 with complete data. The mean baseline serum lactate level was 18.1 mg/dL (SD ± 8.6). This increased to 32.7 mg/dL (SD ± 15.0) at 1.25 hrs. The mean 1 hr DS was 3.85 (SD ± 2.0). The correlations between treatment lactate, Δ lactate, 1 hr serum albuterol concentrations (R, S, and total), and admission to hospital are shown (see table). Both treatment and Δ lactate were highly correlated with total serum albuterol, R albuterol, and S albuterol.
There was no correlation between treatment lactate or Δ lactate and hospital admission. Conclusion: Lactate and Δ lactate concentrations correlate with albuterol concentrations in patients presenting with acute asthma. Fifty-one percent were <21 years old and 54% were female. We found a decline of 27% (95% CI: 23%-30%, p < 0.0001; R² = 0.73, p < 0.0001) in the overall yearly asthma visits to total ED visits from 1996 to 2010. When we analyzed sex and age groups separately, we found no statistically significant changes for females or for males <21 years old (R² ≤ 0.016, p ≥ 0.65). For females and males >21 years old, yearly asthma visits to total ED visits from 1996 to 2010 decreased 39% (95% CI: 33%-43%, p < 0.0001; R² = 0.90, p < 0.0001) and 20% (95% CI: 14%-26%, p < 0.0001; R² = 0.80, p < 0.0001), respectively. Conclusion: We found an overall decrease in yearly asthma visits to total ED visits from 1996 to 2010. We speculate that this decrease is due to greater corticosteroid use despite the increasing prevalence of asthma. It is unclear why this decrease was seen in adults and not in children and why it was greater for adult females than males. Objectives: Our objectives were to describe the use of a unique data collection system that leveraged EMR technology and to compare its data entry error rate to traditional paper data collection. Methods: This is a retrospective review of data collection methods during the first 12 months of a multicenter study of ED, anti-coagulated, head injury patients. On-shift ED physicians at five centers enrolled eligible patients and prospectively completed a data form. Enrolling ED physicians had the option of completing a one-page paper data form or an electronic ''dotphrase'' (DP) data form. Our hospital system uses an Epic®-based EMR. A feature of this system is the ability to use DPs to assist in medical information entry. A DP is a preset template that may be inserted into the EMR when the physician types a period followed by a code phrase (in this case ''.ichstudy''). Once the study DP was inserted at the bottom of the electronic ED note, it prompted enrolling physicians to answer study questions. Investigators then extracted data directly from the EMR. Our primary outcomes of interest were the prevalence of DP data form use and rates of data entry errors. Results: From 7/2009 through 8/2010, 883 patients were enrolled. DP data forms were used in 288 (32.6%; 95% CI 29.5, 35.7%) cases and paper data forms in 595 (67.4%; 95% CI 64.3, 70.5%). The prevalence of DP data form use at the respective study centers was 11%, 16%, 18%, 31%, and 85%. Sixty-six (43.7%; 95% CI 35.8, 51.6%) of 151 physicians enrolling patients used DP data entry at least once. Using multivariate analysis, we found no significant association between physician age, sex, or tenure and DP use. Data entry errors were more likely on paper forms (234/595, 39.3%; 95% CI 35.4, 43.3%) than DP data forms (19/288, 6.6%; 95% CI 3.7, 9.5%), difference in error rates 32.7% (95% CI 27.9, 37.6%, p < 0.001). Conclusion: DP data collection is a feasible means of study data collection. DP data forms maintain all study data within the secure EMR environment, obviating the need to maintain and collect paper data forms. This innovation was embraced by many of our emergency physicians. We found lower data entry error rates with DP data forms compared to paper forms. Background: Inadequate randomization, allocation concealment, and blinding can inflate effect sizes in both human and animal studies.
These methodological limitations might in part explain some of the discrepancy between promising results in animal models and non-significant results in human trials. Whereas blinding is not always possible in clinical or animal studies, true randomization with allocation concealment is always possible and may be as important in minimizing bias. Objectives: To determine the frequency with which published emergency medicine (EM) animal research studies report randomization, specific randomization methods, allocation concealment, and blinding of interventions and measurements, and to estimate whether these have changed over time. Methods: All EM animal research publications from 1/2000 through 12/2009 in Ann Emerg Med and Acad Emerg Med were reviewed by two trained investigators for a statement regarding randomization, and specific descriptions of randomization methods, allocation concealment, blinding of intervention, and blinding of measurements, when possible. Raw initial agreement was calculated and differences were settled by consensus. The first (period 1 = 2000-2004) and second (period 2 = 2005-2009) 5-year periods were compared with 95% confidence intervals. Results: Of 117 EM animal research studies, 109 were appropriate for review because they involved intervention in at least two groups. Blinding of interventions and measurements was not considered possible in 37% and 3%, respectively. Significant differences between periods 1 and 2 were absent, although there was a trend towards less blinding of interventions and more blinding of measurements. Raw agreement was 91%. Conclusion: Although randomization is mentioned in the majority of studies, allocation concealment and blinding remain underutilized in EM animal research. We did not compare outcomes between blinded and non-blinded, or randomized and non-randomized, studies because of small sample size. This review fails to demonstrate significant improvement over time in these methodological limitations in EM animal research publications. Journals might consider requiring authors to explicitly describe their randomization, allocation, and blinding methods. Background: Cluster randomized trials (CRTs) are increasingly utilized to evaluate quality improvement interventions aimed at health care providers. In trials testing ED interventions, migration of emergency physicians (EPs) between hospitals is an important concern, as contamination may affect both internal and external validity. Objectives: We hypothesized that geographically isolating emergency departments would prevent migratory contamination in a CRT designed to increase ED delivery of tPA in stroke (The INSTINCT Trial). Methods: INSTINCT was a prospective, cluster-randomized, controlled trial. Twenty-four Michigan community hospitals were randomly selected in matched pairs for study. Following selection of a single hospital, all hospitals within 15 miles were excluded from the sample pool. Individual emergency physicians staffing each site were identified at baseline (2007) and 18 months later. Contamination was defined at the cluster level, with substantial contamination defined a priori as >10% of EPs affected. Non-adherence, total crossover (contamination + non-adherence), migration distance, and characteristics were determined. Results: 307 emergency physicians were identified at all sites. Overall, 7 (2.3%) changed study sites. One moved between control sites, leaving 6 (2.0%) total crossovers.
Of these, 2 (0.7%) moved from intervention to control (contamination) and 4 (1.3%) moved from control to intervention (non-adherence). Contamination was observed in 2 of 24 sites, with 17% and 9% contamination of the total site EP workforce at follow-up, respectively. Two of 6 crossovers occurred between hospitals within the same health system. Average migration distance was 42 miles for all EPs in the study and 35 miles for EPs moving from intervention to control sites. Conclusion: The mobile nature of emergency physicians should be considered in the design of quality improvement CRTs. Use of a 15-mile exclusion zone in hospital selection for this CRT was associated with very low levels of substantial cluster contamination (1 of 24) and total crossover. Assignment of hospitals from a single health system to a single study group and/or an exclusion zone of 45 miles would have further reduced crossovers. Increased reporting of contamination in cluster randomized controlled trials is encouraged to clarify thresholds and facilitate CRT design. Objectives: An extension of the likelihood ratio (LR), the average absolute likelihood ratio (AALR), was developed to assess the average change in the odds of disease that can be expected from a test, or series of tests, and an example of its use to diagnose wide QRS complex tachycardia (WCT) is provided. Methods: Results from two retrospective multicenter case series were used to assess the utility of QRS duration and axis to assess for ventricular tachycardia (VT) in patients with undifferentiated regular sustained WCT. Serial patients with heart rate (HR) >120 beats per minute and QRS duration >120 milliseconds (msec) were included. The final tachydysrhythmia diagnosis was determined by a number of methods independent of the ECG. The AALR is defined as AALR = 1/N_total [Σ (N_i × LR_i) (for LR > 1) + Σ (N_k / LR_k) (for LR < 1)], where LR_i and LR_k are the interval LRs, and N_i and N_k are the numbers of patients with test results within the corresponding intervals. ROC curves were constructed, and interval LRs and AALRs were calculated for the QRS duration and axis tests individually, and when applied together. Confidence intervals were bootstrapped with 10,000 replications using the R boot package. Results: 187 patients were included: 95 with supraventricular tachycardia (SVT) and 92 with VT. Optimal QRS intervals (msec) for distinguishing VT from SVT were: QRS ≤ 130, 130 < QRS < 160, and QRS ≥ 160. QRS axis results were dichotomized to upward right axis (181-270 degrees) or not (−89 to 180 degrees). Results are listed in the table. Conclusion: Application of the QRS interval and axis tests together for patients with wide QRS complex tachycardia changes the odds of ventricular tachycardia, on average, by a factor of 3.5 (95% CI 2.4 to 6.2), and this is mildly improved over the QRS duration test alone. Both a strength and weakness of the AALR is its dependence on the pretest probability of disease. The AALR may be helpful for clinicians and researchers to evaluate and compare diagnostic testing approaches, particularly when strategies with serial non-independent tests are considered. consultation for adults with metastatic solid tumors at an urban, academic ED located within a tertiary care referral center. Field notes were grouped into barrier categories and then quantified when possible. Patient demographics for those who did and did not enroll were extracted from the medical record and quantified.
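To make the AALR definition above concrete, the following is a minimal Python/NumPy sketch. It is only an illustration: the abstract bootstrapped its confidence intervals with the R boot package, and the interval counts and interval LRs shown here are hypothetical, not the study's data.

```python
import numpy as np

def aalr(counts, interval_lrs):
    """Average absolute likelihood ratio: intervals with LR > 1 contribute
    N_i * LR_i, intervals with LR < 1 contribute N_k / LR_k, and the sum is
    divided by the total number of patients."""
    counts = np.asarray(counts, dtype=float)
    lrs = np.asarray(interval_lrs, dtype=float)
    contributions = np.where(lrs >= 1, counts * lrs, counts / lrs)
    return contributions.sum() / counts.sum()

def aalr_bootstrap_ci(patient_lrs, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AALR, resampling patients with replacement.
    `patient_lrs` holds the interval LR assigned to each individual patient."""
    rng = np.random.default_rng(seed)
    lrs = np.asarray(patient_lrs, dtype=float)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(lrs, size=lrs.size, replace=True)
        # per-patient |LR| contribution: LR if LR >= 1, else 1/LR
        boot[b] = np.where(sample >= 1, sample, 1 / sample).mean()
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical example: three QRS-duration intervals (counts and LRs are made up).
counts = [60, 40, 87]            # patients per test-result interval
interval_lrs = [0.25, 1.0, 4.0]  # interval likelihood ratios
print(aalr(counts, interval_lrs))
patient_lrs = np.repeat(interval_lrs, counts)
print(aalr_bootstrap_ci(patient_lrs))
```

Resampling individual patients and averaging LR (or 1/LR when LR < 1) reproduces the same statistic at the patient level, which is what a percentile bootstrap over patients requires.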
Patients who did not meet inclusion criteria for the study (e.g., cognitive impairment) were excluded from the analysis. Results: Attempts were made to enroll 42 eligible patients in the study, and 23 were successfully enrolled (55% enrollment rate). Barriers to enrollment were deduced from the field notes and placed into the following categories from most to least common: patient refusal (6); diagnostic uncertainty regarding cancer stage (4); severity of symptoms precluding participation (4); patient unaware of illness or stage (3); and family refusal (2). Conclusion: Patients, families, and diagnostic uncertainty are barriers to enrolling ED patients with advanced illness in clinical trials. It is unclear whether these barriers are generalizable to other study sites and disease processes other than cancer. Objectives: The purpose of this study was to evaluate the use of a high-fidelity mannequin bedside simulation scenario followed by a debriefing session as a tool to improve medical student knowledge of palliative care techniques. Methods: Third year medical students participating in a 12-week simulation curriculum during a surgery/emergency medicine/anesthesia clerkship were eligible for the study. All students were administered a pretest to evaluate their baseline knowledge of palliative care and randomized to a control or intervention group. During week 3 or 4, students in the intervention group participated in and observed two end-of-life scenarios. Following the scenarios, a faculty debriefer trained in palliative care addressed critical actions in each scenario. During week 10, all students received a posttest to evaluate for improvement in knowledge. The pre-test and post-test consisted of 12 questions addressing prognostication, symptom control, and the Medicare Hospice Benefit. Students were de-identified and pre- and post-tests were graded by a blinded scorer. Results: From Jan-Dec 2011, 70 students were included in the study and 5 were excluded due to incomplete data. The mean score on the pre-test for the intervention group was 3.16, and for the control group was 3.45 (p = 0.90). The results indicate that educators identify the most important scenarios as protocol-based simulations. Respondents also suggested that scenarios of very common emergency department presentations bear a great deal of importance. Emergency medicine educators assign priority to simulations involving professionalism and communication. Finally, many respondents noted that they use simulation to teach the presentation and management of rare or less frequent, but important, disease processes. The identification of these scenarios would suggest that educators find simulation useful for filling in ''gaps'' in resident education. Background: Prescription drug misuse is a growing problem among adolescent and young adult populations. Objectives: To determine factors associated with past year prescription drug misuse, defined as using prescription sedatives, stimulants, or opioids to get high, taking them when they were prescribed to someone else, or taking more than was prescribed, among patients seeking care in an academic ED. Methods: Adolescents and young adults (aged 14-20 years) presenting for ED care at a large, academic teaching hospital were approached to complete a computerized screening questionnaire regarding demographics, prescription drug misuse, illicit drug use, alcohol use, and violence in the past 12 months. Logistic regression was used to predict past year prescription drug misuse.
Results: Over the study time period, there were 2156 participants (86% response rate), of whom 300 (13.9%) endorsed past year prescription drug misuse. Specifically, rates of past year misuse were 8.7% for opioids, 5.4% for sedatives, and 8.0% for stimulants. Significant overlap exists among classes, with over 40% misusing more than one class of medications. In the multivariate analysis, significant predictors of past year prescription drug misuse included female gender. Conclusion: Approximately one in seven adolescents or young adults seeking ED care have misused prescription drugs in the past year. While opioids are the most common drug misused, significant overlap exists among this population. Given the correlation of prescription drug misuse with the use and misuse of other substances (i.e., alcohol, cough medicine, marijuana), more research is needed to further understand these relationships and inform interventions. Additionally, future research should focus on understanding the differences in demographics and risk factors associated with misuse of each separate class of prescription drugs. Objectives: This study aims to examine the association of depression with high ED utilization in patients with non-specific abdominal pain. Methods: This single-center, prospective, cross-sectional study was conducted in an urban academic ED located in Washington, DC as part of a larger study to evaluate the interaction between depression and frequency of ED visits and chronic pain. As part of this study, we screened patients using the PHQ-9, a nine-item questionnaire that is a validated, reliable predictor of major depressive disorder. We analyzed the subset of respondents with a non-specific abdominal pain diagnosis (ICD-9 code of 789.xx). Our principal outcome of interest was the rate of a positive depression screen in patients with non-specific abdominal pain. We analyzed the prevalence of a positive depression screen among this group and also conducted a chi-square analysis to compare high ED use among abdominal pain patients with a positive depression screen versus those without a positive depression screen. We defined high ED utilization as >3 visits in a 364-day period prior to the enrollment visit. Background: Numerous studies have found high rates of co-morbid mental illness and chronic pain in emergent care settings. One psychiatric diagnosis frequently associated with chronic pain is Major Depressive Disorder (MDD). Objectives: We conducted a study to characterize the relationship between MDD and chronic pain in the emergency department (ED) population. We hypothesized that patients who present to the ED with self-reported chronic pain will have higher rates of MDD. Methods: This was a single-center, prospective, cross-sectional study. We used a convenience sample of non-critically ill, English-speaking adult patients presenting with non-psychiatric complaints to an urban academic ED over 6 months in 2011. We oversampled patients presenting with pain-related complaints (musculoskeletal pain or headache). Subjects were surveyed about their demographic and other health and health care characteristics and were screened with the PHQ-9, a nine-item questionnaire that is a validated, reliable predictor of MDD. We conducted bivariate (chi-square) and multivariate analyses controlling for demographic characteristics (race, income, sex, age) using STATA v. 10.0. Our principal dependent variable of interest was a positive depression screen (PHQ-9 score ≥10).
Our principal independent variable of interest was the presence of self-reported chronic pain (greater than 3 months). Results: Of 77 patients enrolled, 2 did not meet all inclusion criteria. 50 had two or more assessments for comparison. Their average age was 39 (range 21-59), 70% were male, and 74% were in police custody. 38% used methadone alone; 16% heroin alone; 4% oxycodone alone; and the rest used multiple opioids. The average dose of IM methadone was 10.3 mg (range 5-20 mg); all but 3 patients received 10 mg. The mean COWS score before receiving IM methadone was 11.19 (range 3-23), compared to 4.83 (range 0-20) 30 minutes after methadone (p < 0.001; mean difference = −6.36; 95% CI = −4.57 to −8.15). The mean WSS before and after methadone was −1.54 (range −1 to −2) and −0.755 (range −2 to 2), respectively (p < 0.001; 95% CI = −1.0 to −0.57). The mean physician-assessed WSS was significantly lower than the patient's own assessment by 0.78 (p < 0.001). Adverse events included an asthmatic patient with bronchospasm whose oxygen saturation decreased from 95% to 88% after receiving methadone, a patient whose oxygen saturation decreased from 95% to 93%, and two patients whose AMSS decreased from −1 to −2 (indicating moderate sedation). Background: As the US population ages, the coexistence of COPD and acute coronary syndrome (ACS) is expected to be more frequent. Very few studies have examined the effect of COPD on outcomes in ACS patients, and, to our knowledge, there has been no report on biomarkers that possibly mediate between COPD and long-term ACS patient outcomes. Objectives: To determine the effect of COPD on long-term outcomes in patients presenting to the emergency department (ED) with ACS and to identify prognostic inflammatory biomarkers. Methods: We performed a prospective cohort study enrolling ACS patients from a single large tertiary center. Hospitalized patients aged 18 years or older with ACS were interviewed and their blood samples were obtained. Seven inflammatory biomarkers were measured, including interleukin-6 (IL-6), C-reactive protein (CRP), tumor necrosis factor-alpha (TNF-alpha), vascular cell adhesion molecule (VCAM), E-selectin, lipoprotein-a (LP-a), and monocyte chemoattractant protein-1 (MCP-1). The diagnoses of ACS and COPD were verified by medical record review. Annual telephone follow-up was conducted to assess health status and major adverse cardiovascular events (MACE) outcomes, a composite endpoint including myocardial infarction, revascularization procedure, stroke, and death. Background: Aortic dissection (AD) is an uncommon life-threatening condition requiring prompt diagnosis and management. Thirty-eight percent of cases are missed upon initial evaluation. The cornerstone of accurate diagnosis hinges on maintaining a high index of clinical suspicion for the various patterns of presentation. Quality documentation that reflects consideration for AD in the history, exam, and radiographic interpretation is essential for both securing the diagnosis and for protecting the clinician in missed cases. Objectives: We sought to evaluate the quality of documentation in patients presenting to the emergency department with subsequently diagnosed acute AD. Methods: IRB-approved, structured, retrospective review of consecutive patients with newly diagnosed non-traumatic AD from 2004 to 2010. Inclusion criteria: new AD diagnosis via the ED. Exclusion criteria: AD diagnosed at another facility; chronic, traumatic, or iatrogenic AD.
Trained/monitored abstractors used a standardized data tool to review ED and hospital medical records. Descriptive statistics were calculated as appropriate. Inter-rater reliability was measured. Our primary performance measure was the prevalence of a composite of all three key historical elements (1. any back pain, 2. neurologic symptoms including syncope, and 3. sudden onset of pain) in the attending emergency physician's documentation. Secondary outcomes included documentation of: AD risk factors, pain quality, back pain at multiple locations, presence/absence of pulse symmetry, mediastinal widening on chest radiograph, and migratory nature of the pain. Results: 65/203 met our inclusion/exclusion criteria. The mean age was 58.4 years; 65% were male, and 23 (35.4%) were Stanford A. 32 (60%) presented with a chief complaint of chest pain. Primary outcome measure: 6/65 (9.2%; 95% CI 3.5, 19.0) documented the presence/absence of all three key historical elements [back pain = 42/65, 64.6% (51.8, 76.1); neuro symptoms = 39/65, 60% (47.1, 72.0); sudden onset = 12/65, 18.5% (9.9, 30.0)]. Limitations: Small number of confirmed AD cases. Conclusion: In our cohort, emergency physician documentation of key historical, physical exam, and radiographic clues of AD is suboptimal. Although our ED miss rate is lower than that reported by previous authors, there is an opportunity to improve documentation of these pivotal elements at our institution. Objectives: This study assessed the opinions of international emergency medicine (IEM) and global health (GH) fellowship program directors, in addition to recent and current fellows, regarding streamlining the application process and timeline in an attempt to implement change and improve this process for program directors and fellows alike. Methods: A total of 34 current IEM and GH fellowship programs were found through an internet search. An electronic survey was administered to current IEM and GH fellowship directors, current fellows, and recent graduates of these 34 programs. Results: Response rates were 88% (n = 30) for program directors and 53% (n = 17) for current and recent fellows. The great majority of current and recent fellows (77%) and program directors (83%) support transitioning to a common application service. Similarly, 88% of current and recent fellows and 83% of program directors support instituting a uniform deadline date for applications. However, only 47% of recent/current fellows and 33% of program directors would support a formalized match process like the NRMP. Conclusion: The majority of fellows and program directors support streamlining the application for all IEM and GH fellowship programs. This could improve the application process for both fellows and program directors, and ensure the best fit for the candidates and for the fellowship programs. In order to establish effective emergency care in rural Sub-Saharan Africa, the unique practice demographics and patient dispositions must be understood. Objectives: The objectives of this study are to determine the demographics of the first 500 patients seen at Nyakibale Hospital's ED and assess the feasibility of treating patients in a rural District Hospital ED in Sub-Saharan Africa. Methods: A descriptive cross-sectional analysis was performed; the first 500 consecutive patient visits in the ED's patient care log were reviewed by an unblinded abstractor. Data collected included age, sex, condition upon discharge, and disposition. All authors discussed uncertainties and formed a consensus. Descriptive statistics were performed.
Results: Of the first 500 patient visits, 254 (50.8%) occurred when the outpatient clinic was open. There were 275 (55%) male visits. The average age was 25.2 years (SD ± 22.2). Pediatric visits accounted for 218 (43.6%) patients, and 132 (26.4%) visits were for children under five years old. Only one patient expired in the ED, and 401 (80.2%) were in good condition after treatment, as subjectively defined by the ED physicians. One person was transferred to another hospital. After treatment, 180 (36%) patients were discharged home. Of those admitted to an inpatient ward, 126 (25.2%) patients were admitted to medical wards, 97 (19.4%) to pediatrics, and 60 (12%) to surgical wards. Only six (1.2%) patients went directly to the operating theatre. Conclusion: This consecutive sample of patient visits from a novel rural district hospital ED in Sub-Saharan Africa included a broad demographic range. After treatment, most patients were judged to be in ''good condition'', and over one third of patients could be discharged after ED management. This sample suggests that it is possible to treat patients in an ED in rural Sub-Saharan Africa, even in cases where surgical backup and transfers to a higher level of care are limited or unavailable. Background: Communication failures in clinical handoffs have been identified as a major preventable cause of patient harm. In Italy, advanced prehospital care is provided predominantly by physicians who work on ambulances in teams with either nurses or basic rescuers. The hand-off from prehospital physicians to hospital emergency physicians (EPs) is especially susceptible to errors with serious consequences. There are no studies in Italy evaluating communication at this transition in patient care. Studying this, however, requires a tool that measures the quality of this communication. Objectives: The purpose of this study is to develop and validate a tool for the evaluation of communication during the clinical handoff from prehospital to emergency physicians in critically ill patients. Methods: Several previously validated tools for evaluating communication in hand-offs were identified through a literature search. These were reviewed by a focus group consisting of EPs, nurses, and rescuers, who then adapted and translated the Australian ISBAR (Identification, Situation, Background, Assessment, Recommendation), the tool most relevant to local practice. The Italian ISBAR tool consists of the following elements: patient and provider identification; patient's chief complaint; patient's past medical history, medications, and allergies; prehospital clinical assessment (primary survey, illness severity, vital signs, diagnosis); and treatment initiated and anticipated treatment plan. We conducted and video-taped the hand-offs of care from the prehospital physicians to the EPs in 12 pediatric critical care simulations. Four physician raters were trained in the Italian ISBAR tool and used it to independently assess communication in each simulation. To assess agreement, we calculated the proportion of agreement among raters for each ISBAR question, Fleiss' kappas for each simulation, as well as mean agreement and mean kappas with standard deviations. Results: There was 100% agreement among the four physicians on 70% of the items. The mean level of agreement was 91% (SD 0.15). The overall mean kappa was 0.67 (SD 0.10). Conclusion: The standardized tool resulted in good agreement by physician raters.
This validated tool may be helpful in studying and improving hand-offs in the prehospital to emergency department setting. Objectives: We hypothesized that residents who were provided with virtual patients (VPs) prior to high-fidelity simulation (HFs) would perform more thoroughly and efficiently than residents who had not been exposed to the online simulation. Methods: We randomized a group of 30 residents from an academic, PGY 1-4 emergency medicine program to complete an online VPs case, either prior to (VPs group, n = 14 residents) or after (n = 16) their HFs case. The VPs group had access to the online case (which reviewed asthma management) 3 days prior to the HFs session. All residents individually participated in their regularly scheduled HFs and were blinded to the content of the case, a patient in moderate asthma exacerbation. The authors developed a dichotomous checklist consisting of 33 items recorded as done/not done along with time completed. A two-sample proportion test was used to evaluate differences in the individual items completed between groups. A Wilcoxon Rank Sum test was used to determine the differences in overall and subcategory performance between the two groups. Median time to completion was analyzed using the log-rank test. Results: The VPs group had better overall checklist performance than the control group (p = 0.046). In addition, the VPs group was more thorough in obtaining an HPI (p = 0.009). Specific actions (related to asthma management) were performed better by the VPs group: inquiring about last/prior ED visits (p = 0.038), total number of hospitalizations in the prior year (p = 0.029), prior intubations (p = 0.001), and obtaining peak flow measurements (p = 0.030). Overall there was no difference in time to event completion between the two groups. Conclusion: We found that when HFs is primed with educational modalities such as VPs, there was an improvement in performance by trainees. However, the improved completeness of the VPs group may have served as a barrier to efficiency, inhibiting our ability to identify a statistically significant difference in efficiency overall. VPs may aid in priming the learners and maximizing the efficiency of training using high-fidelity simulations. Training using an animal model helped develop residents' skills and confidence in performing PTV. Retention was found to be good at 2 months post-training. This study underscores the need for hands-on training in rare but critical procedures in emergency medicine. Methods: In this cross-sectional study at an urban community hospital, 15 residents in their second or third year of training from a 3-year EM residency program performed US-guided catheterizations of the internal jugular vein (IJ) on a simulator manufactured by Blue Phantom. Two board-certified EM physicians observed for the completion of pre-defined procedural steps using a checklist and rated the residents' overall performance of the procedure. Overall performance ratings were provided on a Likert scale of 1 to 10, with 1 being poor and 10 being excellent. Residents were given credit for performing a procedural step if at least one rater marked its completion. Agreement between raters was calculated using intraclass correlation coefficients for domain and summary scores. The same protocol was then repeated on an unembalmed cadaver using two different board-certified EM physician raters.
Criterion validity of the residents' proficiency on the simulator was evaluated by comparing their median overall performance rating on the simulator to that on the cadaver and by comparing the proportion of residents completing each procedural step between modalities with descriptive statistics. Results: EM residents' overall performance rating on the simulator was 7.4 (95% CI: 6.0 to 8.8) and on the cadaver was 6.1 (95% CI: 4.7 to 7.5). The results for each procedural step are summarized in the attached figure. Inter-rater agreement was high for assessments on both the simulator and cadaver, with overall kappa scores of 0.89 and 0.96, respectively. Background: The environment in the emergency department (ED) is chaotic. Physicians must learn how to multi-task effectively and manage interruptions. Noise becomes an inherent byproduct of this environment. Previous studies in the surgical and anesthesiology literature examined the effect of noise levels and cognitive interruptions on resident performance during simulated procedures; however, the effect of noise distraction on resident performance during an ED procedure has not yet been studied. Objectives: Our aim was to prospectively determine the effects of various levels of noise distraction on the time to successful intubation of a high-fidelity simulator. Methods: A total of 45 emergency medicine, emergency medicine/internal medicine, and emergency medicine/family medicine residents were studied in background noise environments of less than 50 decibels (noise level 1), 60-70 decibels (noise level 2), and greater than 70 decibels (noise level 3). Noise levels were standardized by a dosimeter (Ex Tech Instruments, Heavy Duty 600). Each resident was randomized to the order in which he or she was exposed to the various noise levels and had a total of 2 minutes to complete each of the intubation attempts, which were performed in succession. Time, in seconds, to successful intubation was measured in each of these scenarios, with the start time defined as the time the resident picked up the STORZ C-MAC video laryngoscope blade and the finish time defined as the time the tube passed through the vocal cords as visualized by an observer on the STORZ C-MAC video screen. Analytic methods included analysis of variance, Student's t-test, and Pearson's chi-square. Results: No significant differences were found between time to intubation and noise level, nor did the order of noise level exposure affect the time to intubation (see table). There were no significant differences in success rate between the three noise levels (p = 0.178). A significant difference in time to intubation was found between the residents' second and third intubation attempts, with decreased time to intubation for the third attempt (p = 0.001). Conclusion: Noise level did not have an effect on time to intubation or intubation success rate. Time to intubation decreased between the second and third intubations regardless of noise level. Background: Growing use of the emergency department (ED) is cited as a cause of rising health care costs and a target of health care reform. EDs provide approximately one quarter of all acute care outpatient visits in the US. EDs are a diagnostic center and a portal for rapid inpatient admission. The changing role of EDs in hospital admissions has not been described. Objectives: To determine whether admission through the ED has increased compared with direct hospital admission.
We hypothesized that the use of the ED as the admitting portal increased for all frequently admitted conditions. Methods: We analyzed the Nationwide Inpatient Sample (NIS), the largest US all-payer inpatient care database, from 1993-2006. NIS contains data from approximately 8 million hospital stays each year, and is weighted to produce national estimates. We used an interactive, web-based data tool (HCUPnet) to query the NIS. Clinical Classification Software (CCS) was used to group discharge diagnoses into clinically meaningful categories. We calculated the number of annual admissions and the proportion admitted from the ED for the 20 most frequently admitted conditions. We excluded CCS codes that are rarely admitted through the ED (<10%) as well as obstetric conditions. Background: The optimal dose of opioids for patients in acute pain is not well defined, although 0.1 mg/kg of IV morphine is commonly recommended. Patient-controlled analgesia (PCA) provides an opportunity to assess the adequacy of this recommendation, as use of the PCA pump is a behavioral indication of insufficient analgesia. Objectives: To assess the need for additional analgesia following a 0.1 mg/kg dose of IV morphine by measuring additional self-dosing via a PCA pump. Methods: A three-arm randomized controlled trial was performed in an urban ED with 75,000 annual adult visits. A convenience sample of ED patients ages 18 to 65 with abdominal pain of <7 days duration requiring IV opioids was enrolled between 4/2009 and 6/2010. All patients received an initial dose of 0.1 mg/kg IV morphine. Patients in the PCA arms could request additional doses of 1 mg or 1.5 mg IV morphine by pressing a button attached to the pump, with a 6-minute lock-out period. For this analysis, data from both PCA arms were combined. Software on the pump recorded times when the patient pressed the button (activation) and when he/she received a dose of morphine (successful activation). Results: 137 patients were enrolled in the PCA arms. Median baseline NRS pain score was 9. Mean amount of supplementary morphine self-administered over the 2-hour study period subsequent to the loading dose was 5.7 mg and 6.7 mg for the 1 and 1.5 mg PCA groups, respectively. 124 patients activated the pump at least once (91%, 95% CI: 84 to 94%). Figure 1 shows the frequency distribution of the number of times the pump was activated. Of those who activated the pump, the median number of activations per person was 5 (IQR: 3 to 12). There were 1124 activations of the pump. 60% of activations were successful (followed by administration of morphine), while 40% were unsuccessful as they occurred during the 6-minute lock-out periods. 19% of the activations occurred in the first 30 minutes, 29% in the second 30 minutes, 25% in the third 30 minutes, and 27% in the last 30 minutes after the initial loading dose. Conclusion: Almost all patients requested supplementary doses of PCA morphine, half of whom activated the pump five times or more over a course of 2 hours. This frequency of PCA activations suggests that the commonly recommended dose of 0.1 mg/kg morphine may constitute initial oligoanalgesia in most patients. Marie-Pier Desjardins, Benoit Bailey, Fanny Alie-Cusson, Serge Gouin, Jocelyn Gravel, CHU Sainte-Justine, Montreal, QC, Canada. Background: Administration of corticosteroid at triage has been suggested to decrease the time to corticosteroid administration in the ED.
Objectives: To compare the time between arrival and corticosteroid administration in patients treated with an asthma pathway (AP) or with standard management (SM) in a pediatric ED. Methods: Chart review of children aged 1 to 17 years diagnosed with asthma, bronchospasm, or reactive airways disease seen in the ED of a tertiary care pediatric hospital. For a one-year period, 20% of all visits were randomly selected for review. From these, we reviewed patients who were eligible to be treated with the AP (≥18 months with a previous history of asthma and no other pulmonary condition) and who had received at least one inhaled bronchodilator treatment. Charts were evaluated by a data abstractor blinded to the study hypothesis using a standardized datasheet. Various variables were evaluated, such as age, respiratory rate and O2 saturation at triage, type of physician who saw the patient first, treatment prior to the visit, in the ED, and at discharge, time between arrival and corticosteroid administration, and length of stay (LOS). Background: Return visits comprise 3.5% of pediatric emergency department (PED) visits, at a cost of >$500 million/year nationally. These visits are typically triaged with higher acuity and admission rates and raise concern for lapses in quality of care and patient education during the first visit. Objectives: The aim of this qualitative study was to describe parents' reasons for return visits to the PED. Methods: We prospectively recruited a convenience sample of parents of patients under the age of 18 years who returned to the PED within 72 hours of their previous visit. We excluded patients who were instructed to return, had previously left without being seen, arrived without a parent, were wards of the state, or did not speak English. After obtaining consent, the principal investigator (CE) conducted confidential, in-person, tape-recorded interviews with parents during PED return visits. Parents answered 12 open-ended questions and 9 closed-ended questions using a five-point Likert scale. Responses to open-ended questions were analyzed using thematic analysis techniques. The scaled responses were grouped into three categories of agree, disagree, or neutral. Results: From the 49 closed-ended responses, 86% of parents agreed that their children were getting sicker, and 92% agreed that their children were not getting better. 80% agreed that they were unsure how to treat the illness; however, only 41% agreed they did not feel comfortable taking care of the illness. Only 29% agreed that the medical condition and/or the instructions were not clearly explained in the first visit. Some common themes from the open-ended questions included worsening or lack of improvement of symptoms. Many parents reported having unanswered questions about the cause of the illness and hoped to find out the cause during the return visit. Conclusion: Most parents brought their children back to the PED because they believed the symptoms had worsened or were not improving. Although a large proportion of parents believed that the medical condition was clearly explained at the first visit, many parents still had unanswered questions about the cause of their child's illness. While worsening symptoms seemed to drive most return visits, it is possible that some visits related to failure to improve might be prevented during the first PED visit through a more detailed discussion of disease prognosis and expected time to recover.
Background: Experience indicates that it is difficult to effectively quell many parents' anxiety toward pediatric fevers, making this a common emergency department (ED) complaint. The question remains as to whether at-home treatment has any effect on the course of emergency department treatment or length of stay in this population. Objectives: To determine whether anti-pyretic treatment prior to arrival in the emergency department affects the evaluation or emergency department length of stay of febrile pediatric patients. Methods: A convenience sample of children, ages 0-12 years, who presented to a tertiary care ED with a chief complaint of fever was enrolled. Parents were asked to participate in an eight-question survey. Questions related to demographic information, pre-treatment of the fever, contact with primary care providers prior to ED arrival, and immunization status. Upon admission or discharge, investigators recorded information regarding length of stay, laboratory tests and imaging ordered, and medications given. Results: Eighty-one patients were enrolled in the study. Seventy-six percent of the patients were pre-treated with some form of anti-pyretic by the caregiver prior to ED arrival. There was no significant effect of pre-treatment on whether laboratory tests or medications were ordered in the ED or whether the patient was admitted or discharged. The length of ED stay was found to be significantly shorter among those who received anti-pyretics prior to arrival (184 ± 11 vs. 247 ± 36 minutes; p = 0.03). Conclusion: Among febrile children, those who received anti-pyretics prior to their ED visit had statistically significantly shorter lengths of stay. This also supports implementation of triage or nursing protocols to administer an anti-pyretic as soon as possible in the hope of decreasing ED throughput times. Background: During the past two decades, the prevalence of overweight (BMI percentile >95) in children has more than doubled, reaching epidemic proportions both nationally and globally. The public health burden is enormous given the increased risk of adult obesity as well as the adverse consequences on cardiovascular, metabolic, and psychological health. Despite the overwhelming prevalence, the effect of obesity on emergency care has received little attention. Objectives: The goal of this study is to determine the relation of weight to reported emergency department visits in children from a nationally representative sample. Methods: Weight (as reported by parents) and height, along with frequency of and reason for emergency department (ED) use in the last 12 months, were obtained from children aged 10-17 years (n = 46,707) in the cross-sectional, telephone-administered National Survey of Children's Health (NSCH). BMI percentiles were calculated using sex-specific BMI-for-age growth charts from the CDC (2000). Children were categorized as: underweight (BMI percentile ≤5), normal weight (>5 to <85), at risk for overweight (85 to <95), and overweight (≥95). Prevalence of ED use was estimated and compared across BMI percentile categories using chi-square analysis and multivariable logistic regression. Taylor-series expansion was used for variance estimation of the complex survey design. Results: The prevalence of at least one ED use in the past 12 months increased with increasing BMI percentiles (figure 1, p < 0.001). Additionally, overweight children were more likely to have more than one visit.
Overweight children were also less likely to report an injury, poisoning, or accident as the reason for the ED visit compared to other BMI categories (47, 55, 59, and 54% in overweight, at-risk, normal, and underweight children, respectively; p < 0.05). Conclusion: As rates of childhood obesity continue to grow in the U.S., we can expect greater demands on the ED. This will likely translate into an increased emphasis on the care of chronic conditions rather than injuries and accidents in the pediatric ED setting. Results: Mean pediatric satisfaction score was 84.1 (SD 3.9) compared with 81.4 (3.2) for adult patients (P < 0.001); monthly sample sizes ranged from 14-74 and from 30-125 for the two populations, respectively. Both populations showed an increase in satisfaction after opening of the PED-ED. For both populations there was no significant trend in patient satisfaction from the beginning of the study period to the opening of the PED-ED, but after the opening the models of the populations differed. The pediatric satisfaction model was an interrupted two-slope model, with an immediate jump of 3.5 points in November and an increase of 0.2 points per month thereafter. In contrast, adult satisfaction scores did not show a jump but increased linearly (two-slope model) after 11/2011 at a rate of 0.3 per month. Prior to the opening of the PED-ED, mean monthly pediatric and adult satisfaction scores were 81.5 (2.4) and 79.5 (2.8), respectively (difference 2.0, 95% CI 0.1-3.8, P = 0.04). After the opening the mean scores were 86.8 (3.1) and 83.2 (2.4), respectively (difference 3.6, 95% CI 2.1-5.0, P < 0.001). Conclusion: Opening of a dedicated PED-ED was associated with a significant increase in patient satisfaction scores both for children and adults. Patient satisfaction for children, as compared to adults, was higher before and after opening a PED-ED. Background: There are racial disparities in outcomes among injured children. In particular, black race appears to be an independent predictor of mortality. Objectives: To evaluate disparities among ED visits for unintentional injuries among children ages 0-9. Methods: Five years of data (2004-2008) from the National Hospital Ambulatory Medical Care Survey were combined. Inclusion criteria were defined as unintentional injury visits (e-code 800.0 to 869.9 or 888.0 to 929.9) and age 0-9 years. Visit rates per 100 population (defined by the US Census) were calculated by race and age group. Weighted multivariate logistic regression analysis was performed to describe associations between race and specific outcome variables and related covariates. Primary statistical analyses were performed using SAS version 9.1.3. Results: 21,524,000 of 585,294,000 weighted ED visits met our inclusion criteria (3.7%). Per 100 persons, black children had 1.5 times as many ED visits for unintentional injuries as whites (Table). There were no racial differences in the sex ratio (1.4 boy visits: 1 girl), proportion of visits by age, ED disposition, immediacy with which they needed to be seen, whether or not they were evaluated by an attending physician, metropolitan vs. rural hospital, admission length of stay, mode of transportation for ED arrival, number of procedures, diagnostic services, or ED medications. Background: Sudden cardiac arrests in schools are infrequent but emotionally charged events. Little data exist describing AED use in these events.
Objectives: The purpose of our study was to 1) describe characteristics and outcomes of school cardiac arrests (CA), and 2) assess the feasibility of conducting bystander interviews to describe the events surrounding school CA. Methods: We performed a telephone survey of bystanders to CA occurring in K-12 schools in communities participating in the Cardiac Arrest Registry to Enhance Survival (CARES) database. The study period was from 8/2005 to 12/2010 and continued in one community through 2011. Utstein-style data and outcomes were collected from the CARES database. A structured telephone interview of a bystander or administrative personnel was conducted for each CA. A descriptive summary was used to assess for the presence of an AED, provision of bystander CPR (BCPR), and information regarding AED deployment, training, and use, and perceived barriers to AED use. Descriptive data are reported. Results: During the study period there were 30,603 CAs identified in CARES communities, of which 73 occurred at educational institutions. Of these, 46 (0.15%) events were at K-12 schools, with 21 (45.7%) being high schools. Of the 46 arrests, a minority were children (15 (32.6%) < age 19), most (32, 84.8%) were witnessed, a majority (36, 76.1%) received BCPR, and 26 (56.5%) were initially in ventricular fibrillation (VF). Most arrests (28/40, 70%) occurred during the school day (7 AM-5 PM). Overall, 14 (30.4%) survived to hospital discharge. Interviews were completed for 29 of 46 (63.0%) K-12 events. Eighteen schools had an AED on site. Most schools (84.2%) with AEDs reported that they had a training program and personnel identified for its use. An AED was applied in 10 of 18 patients, and of these 8 were in VF and 4 survived to hospital discharge. Multiple reasons for AED non-use (n = 8) were identified. Conclusion: Cardiac arrests in schools are rare events; most patients are adults and received BCPR. AED use was infrequent, even when available, but resulted in excellent (4/10) survival. Further work is needed to understand AED non-use. Post-event interviews are feasible and provide useful information regarding cardiac arrest care. Background: Gastroenteritis is a common childhood disease accounting for 1-2 million annual pediatric emergency visits. Current literature supports the use of anti-emetics, reporting improved oral re-hydration, cessation of vomiting, and reduced need for IV re-hydration. However, there remains concern that using these agents may mask alternative diagnoses. Objectives: To assess outcomes associated with use of a discharge action plan using ED-dispensed ondansetron at home in the treatment of gastroenteritis. Methods: A prospective, controlled, observational trial of patients presenting to an urban pediatric emergency department (census 22,400) over a 12-month period for acute gastroenteritis. Fifty patients received ondansetron in the ED. Twenty-nine patients were enrolled in the Pediatric Emergency Department Discharge Action Plan (PED-DAP), in which ondansetron for home use was dispensed by the treating clinician. Twenty-one patients were controls. Control patients did not receive home ondansetron. PED-DAP patients were given instructions to administer the ondansetron for ongoing symptoms any time 6 hours post ED discharge. All patients were followed by phone at 7-14 days to assess for the following: time of emesis resolution, alternative diagnoses, unscheduled visits, and adverse events. Results: All 50 patients were followed by phone.
24/29 PED-DAP patients received home ondansetron. 21/29 patients had resolution of emesis in the ED. 7/29 had resolution of their emesis between time of discharge and 24 hours. 1/29 of PED-DAP patients reported emesis after 24 hours from ED discharge. Five patients reported an unscheduled visit. All five unscheduled visits were returns to the ED (1/5 for emesis, 4/5 for diarrhea). 17/21 controls reported resolution of symptoms within the ED. 2/21 of controls had resolution between time of discharge and 24 hours. 1/21 of the control patients had resolution between 24 and 48 hours post-discharge. 1/21 had an unscheduled appointment with the PMD at 72 hours post-discharge for ongoing fever and nausea. In follow-up there were no alternative diagnoses identified. The effect of the PED-DAP on resolution of emesis between discharge and 24 hours appears to be statistically significant (P value < 0.04). Conclusion: Ondansetron dispensed with a discharge action plan appears to provide a modest benefit in resolution of symptoms relative to a control population. Objectives: To determine the repeatability coefficient of a 100 mm VAS in children aged 8 to 17 years in different circumstances: assessments done either at 3- or 1-minute intervals, and when asked to recall their score or to reproduce it. Methods: A prospective cohort study was conducted using a convenience sample of patients aged 8 to 17 years presenting to a pediatric ED. Patients were asked to indicate, on a 100 mm paper VAS, how much they liked a variety of food with four different sets of three questions: (set 1) questions at a 3-minute interval with no specific instruction other than how to complete the VAS and no access to previous scores, (set 2) same format as set 1 except for questions at a 1-minute interval, (set 3) same as set 1 except patients were asked to remember their answers, and (set 4) same as set 1 except patients were shown their previous answers. For each set, the repeatability coefficient of the VAS was determined according to the Bland-Altman method for measuring agreement using repeated measures: 1.96 × √2 × s_w, where s_w is the within-subject standard deviation estimated by ANOVA. The sample size required to estimate s_w to within 10% of its value, as recommended, was 96 patients if we obtained three measurements for each patient. Results: A total of 100 patients aged 12.1 ± 2.4 years were enrolled. The repeatability coefficient for the questions asked at 3-minute intervals was 12 mm, and 8 mm when asked at a 1-minute interval. When asked to remember their previous answers or to reproduce them, the repeatability coefficient for the questions was 7 mm and 6 mm, respectively. Conclusion: The conditions of the assessment (variation in intervals, or patients asked to remember or to reproduce their previous answers) influence the test-retest reliability of the VAS. Depending on circumstances, the theoretical test-retest reliability in children aged 8 to 17 years varies from 6 to 12 mm on a 100 mm paper VAS. Background: Skull radiographs are a useful tool in the evaluation of pediatric head trauma patients. However, there is no consensus on the ideal number of views that should be obtained as part of a standard skull series in the evaluation of pediatric head trauma patients. Objectives: To compare the sensitivity and specificity of a two- and four-film x-ray series in the diagnosis of skull fracture in children, when interpreted by pediatric emergency medicine physicians.
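The Bland-Altman repeatability coefficient quoted in the VAS study above (1.96 × √2 × s_w) can be computed directly from each child's repeated ratings. The sketch below is illustrative only: the data, the function name, and the use of a simple one-way ANOVA pooling are assumptions for demonstration, not the investigators' actual analysis code.

```python
import numpy as np

def repeatability_coefficient(scores):
    """Bland-Altman repeatability coefficient from repeated measures.

    scores: 2-D array-like, one row per subject and one column per
    repeated VAS rating (same number of repeats for every subject).
    s_w is the square root of the ANOVA within-subject mean square;
    the coefficient is 1.96 * sqrt(2) * s_w.
    """
    scores = np.asarray(scores, dtype=float)
    n_subjects, n_repeats = scores.shape
    # Squared deviations of each rating from that subject's own mean
    ss_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum()
    df_within = n_subjects * (n_repeats - 1)
    s_w = np.sqrt(ss_within / df_within)
    return 1.96 * np.sqrt(2) * s_w

# Hypothetical example: 3 children, 3 repeated 100 mm VAS ratings each
example = [[62, 66, 60], [35, 31, 34], [80, 84, 83]]
print(round(repeatability_coefficient(example), 1))  # coefficient in mm
```

Two measurements on the same child are expected to differ by less than this coefficient about 95% of the time, which is why the 6-12 mm figures reported above can be read as bounds on test-retest agreement.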
Methods: A prospective, crossover experimental study was performed in a tertiary care pediatric hospital. The skull radiographs of 100 children were reviewed. These were composed of the 50 most recent cases of skull fracture for which a four-film radiography series was available at the primary setting and 50 controls, matched for age. Two modules, containing a random sequence of two- and four-film series of each child, were constructed in order to have all children evaluated twice (once with two films and once with four films). Board-certified or -eligible pediatric emergency physicians evaluated both modules two to four weeks apart. The interpretation of the four-film series by a radiologist, or when available, the findings on CT scan, served as the gold standard. Accuracy of interpretation was evaluated for each patient. The sensitivity and specificity of the two-film versus the four-film skull x-ray series, in the identification of fracture, were compared. This was a non-inferiority crossover study evaluating the null hypothesis that a series with two views would have a sensitivity (specificity) that is inferior by no more than 0.055 compared to a series with four views. A total of 50 controls and 50 cases were needed to establish non-inferiority of the two-film series versus the four-film series, with a power of 80% and a significance level of 5%. Results: Ten pediatric emergency physicians participated in the study. For each radiological series, the proportion of accurate interpretation varied from 0.20 to 1.00. The four-film series was found to be more sensitive in the detection of skull fracture than a two-film series (difference: 0.084, 95% CI 0.030 to 0.139). However, there was no difference in the specificity (difference: 0.004, 95% CI −0.024 to 0.033). Conclusion: For children sustaining a head trauma, a four-film skull radiography series is more sensitive than a two-film series, when interpreted by pediatric emergency physicians. The Objectives: We developed a free online video-based instrument to identify knowledge and clinical reasoning deficits of medical students and residents for pediatric respiratory emergencies. We hypothesized that it would be a feasible and valid method of differentiating educational needs of different levels of learners. Methods: This was an observational study of a free, web-based needs assessment instrument that was tested on 44 third- and fourth-year medical students (MS3-4) and 29 pediatric and emergency medicine residents (R1-3). The instrument uses YouTube video triggers of children in respiratory distress. A series of case-based questions then prompts learners to distinguish between upper and lower airway obstruction, classify disease severity, and manage uncomplicated croup and bronchiolitis. Face validity of the instrument was established by piloting and revision among a group of experienced educators and small groups of targeted learners. Final scores were compared across groups using t-tests to determine the ability of the instrument to differentiate between different levels of learners (concurrent validity). Cronbach's alpha was calculated as a measure of internal consistency. Results: Response rates were 19% among medical students and 43% among residents. The instrument was able to differentiate between junior (MS3, MS4, and R1) and senior (R2, R3) learners for both overall mean score (61% vs. 78%, P < 0.01) and mean video portion score (74% vs. 84%, p = 0.02).
Table 1 compares results of several management questions between junior and senior learners. Cronbach's alpha for the test questions was 0.47. Conclusion: This free online video-based needs assessment instrument is feasible to implement and able to identify knowledge gaps in trainees' recognition and management of pediatric respiratory emergencies. It demonstrates a significant performance difference between the junior and senior learners, preliminary evidence of concurrent validity, and identifies target groups of trainees for educational interventions. Future revisions will aim to improve internal consistency. Results: The survey response rate was 87% (60/69). Among responding programs, 40 (67%) reside within a children's hospital (vs. general ED); 51 (85%) are designated Level I pediatric trauma centers. Forty-three (72%) programs accept 1-2 PEM fellows per year; 53 (88%) provided at least some EUS training to fellows, and 42 (70%) offer a formal EUS rotation. On average this training has existed for 3 ± 1 years, and the mean duration of EUS rotations is 4 ± 2 weeks. Twenty-eight (67%) programs with EUS rotations provide fellow training in both a general ED and a pediatric ED. There were no hospital- or program-level factors associated with having a structured training program for PEM fellows. Conclusion: As of 2011, the majority of PEM fellowship programs provide EUS training to their fellows, with a structured rotation being offered by most of these programs. Background: ED visits are an opportunity for clinicians to identify children with poor asthma control and intervene. Children with asthma who use EDs are more likely than other children to have poor control, not be using controller medications, and have less access to traditional sources of primary care. One significant barrier to ED-based interventions is recognizing which children have uncontrolled asthma. Objectives: To determine whether the PACCI, a 12-item parent-administered questionnaire, can help ED clinicians better recognize patients with the most uncontrolled asthma and differentiate between intermittent and persistent asthma. Methods: This was a randomized controlled trial performed at an urban pediatric ED. Parents were asked to answer questions about their child's asthma, including drug adherence and history of exacerbations, as well as demographic questions. Using a convenience sample of children 1-18 years presenting with an asthma exacerbation, attending physicians in the study were asked to complete an assessment of asthma control. Physicians were randomized to receive a completed PACCI (intervention) or not (control group). Using an intent-to-treat approach, clinicians' ability to accurately identify 1) four categories of control used by the National Heart, Lung, and Blood Institute (NHLBI) asthma guidelines, 2) intermittent vs. persistent asthma, and 3) controlled/mildly uncontrolled vs. moderately/severely uncontrolled asthma was compared for both groups using chi-square analysis. Results: Between January and August 2011, 57 patients were enrolled. There were no statistically significant differences between the intervention and control groups for child's sex, age, race, and parents' education. Conclusion: The PACCI improves ED clinicians' ability to categorize children's asthma control according to NHLBI guidelines, and the ability to determine when a child's control has been worsening.
ED clinicians may use the PACCI to identify those children in greatest need of intervention, to guide prescription of controller medications, and to communicate with primary care providers about those children failing to meet the goals of asthma therapy. Fewer than half of physicians reported that the parent of a 2-year-old being discharged from their ED following an MVC-related visit would receive either child passenger safety information or referrals (Table). Conclusion: Emergency physician report of child passenger safety resource availability is associated with trauma center designation. Even when resources are available, referrals from the ED are infrequent. Efforts to increase referrals to community child passenger safety resources must extend to the community ED settings where the majority of children receive injury care. Background: Pediatric subspecialists are often difficult to access following ED care, especially for patients living far from providers. Telemedicine (TM) can potentially eliminate barriers to access related to distance and cost. Objectives: To evaluate the overall resource savings and access that a TM program brings to patients and families. Methods: This study took place at a large, tertiary care regional pediatric health care system. Data were collected from 1/2011-10/2011. Metrics included travel distance saved (round trip between TM presenting sites and the location of the receiving sites), time savings, direct cost savings (based on $0.55/mile), and potential work and school days saved. Indirect costs were calculated as travel hrs saved/encounter (based on an average speed of 55 miles/hr). Demographics and services provided were included. Results: 690 TM consults were completed by 13 separate pediatric subspecialty services. Most patients were school aged (86% ≥ 5 yrs old). Objectives: To analyze test characteristics of the pathway and its effects on ED length of stay, imaging rates, and admission rate before versus after implementation. Methods: Children ages 3-18 presenting to one academic pediatric ED with suspicion for appendicitis from October 2010 to August 2011 were prospectively enrolled to a pathway using previously validated low- and high-risk scoring systems. The attending physician recorded his or her suspicion of appendicitis and then used one of two scoring systems incorporating history, physical exam, and CBC. Low-risk patients were to be discharged or observed in the ED. High-risk patients were to be admitted to pediatric surgery. Those meeting neither low- nor high-risk criteria were evaluated in the ED by pediatric surgery, with imaging at their discretion. Chart review and telephone follow-up were conducted two weeks after the visit. Charts of a random sample of patients with diagnoses of acute appendicitis or a chief complaint of abdominal pain and undergoing a workup for appendicitis in the eight months before and after institution of the pathway were retrospectively reviewed by one or two trained abstractors. Results: Appendicitis was diagnosed in 65 of 178 patients prospectively enrolled to the pathway (37%). Mean age was 9.6 years. Of those with appendicitis, 63 were not low-risk (sensitivity 96.9%, specificity 48.7%). The high-risk criteria had a sensitivity of 73.8% and specificity of 77.0%. A priori attending physician assessment of low risk had a sensitivity of 100% and specificity of 49.6%. A priori assessment of high risk had a sensitivity of 58.5% and specificity of 90.2%. We reviewed 232 visits prior to the pathway and 290 after.
Mean ED length of stay was similar (256 minutes before versus 257 after). CT was used in 12.1% of visits before and 7.3% after (p = 0.07). Use of ultrasound increased (44.8% before versus 55.9% after, p < 0.02). Admission rates were not significantly different (48.3% before versus 42.7% after, p = 0.2). Conclusion: The low-risk criteria had good sensitivity in ruling out appendicitis and can be used to guide physician judgment. Institution of this pathway was not associated with significant changes in length of stay, utilization of CT, or admission rate in an academic pediatric ED. Computer-delivered Alcohol And Driver Safety Behavior Screening And Intervention Program Initiated During An Emergency Department Visit Mary K. Murphy1, Lucia L. Smith2, Anton Palma2, David W. Lounsbury2, Polly E. Bijur2, Paul Chambers2; 1Yale University, New Haven, CT; 2Albert Einstein College of Medicine, Bronx, NY Background: Alcohol use is involved in 32 percent of all fatal motor vehicle crashes, and recent estimates show that at least 448,000 people were injured due to distracted driving last year. Patients who visit the emergency department (ED) are not routinely screened for driver safety behavior; however, large numbers of patients are treated in the ED every day, creating an opportunity for screening and intervention on important public health behaviors. Objectives: To evaluate patient acceptance and response to a computer-based traffic safety educational intervention during an ED visit and one-month follow-up. Methods: DESIGN. Pre/post educational intervention. SETTING. Large urban academic ED serving over 100,000 patients annually. PARTICIPANTS. Medically stable adult ED patients. INTERVENTION. Patients completed a self-administered, computer-based program that queried patients on alcohol use and risky driving behaviors (texting, talking, and other forms of distracted driving). The computer provided patients with educational information on the dangers of these behaviors and collected data on patient satisfaction with the program. Staff called patients one month post-ED visit for a repeat query. Results: 150 patients participated; average age 39 (21-70), 58% Hispanic, 52% male. 96% of patients reported the program was easy to use and were comfortable receiving this education via computer during their ED visit. Self-reported driver safety behaviors pre- and post-intervention (% change): driving while talking on the phone 45%, 16% (−29%, p = 0.001); aggressive driving 44%, 15% (−29%, p = 0.001); texting while driving 28%, 9% (−19%, p = 0.001); driving while drowsy 18%, 4% (−14%, p = 0.002); drinking in excess of NIH safe drinking guidelines 15%, 7% (−8%, p = 0.039); drinking and driving 10%, 1% (−9%, p = 0.006). Conclusion: We found a high prevalence of self-reported risky driving behaviors in our ED population. At 1-month follow-up, patients reported a significant decrease in these behaviors. Overall, patients were very satisfied receiving educational information about these behaviors via computer during their ED visit. This study indicates that a low-intensity, computer-based educational intervention during an ED visit may be a useful approach to educate patients about safe driving behaviors and promote behavior change. Prevalence of Depression among Emergency Department Visitors with Chronic Illness Janice C. Blanchard, Benjamin L.
Bregman, Jeffrey Smith, Mohammad Salimian, Qasem Al Jabr; George Washington University, Washington, DC Background: Persons with chronic illnesses have been shown to have higher rates of depression than the general population. The effect of depression on frequent emergency department (ED) use among this population has not been studied. Objectives: This study evaluated the prevalence of major depressive disorder (MDD) among persons presenting to the George Washington University ED. We hypothesized that patients with chronic illnesses would be more likely to have MDD than those without. Methods: This was a single-center, prospective, cross-sectional study. We used a convenience sample of non-critically ill, English-speaking adult patients presenting with non-psychiatric complaints to an urban academic ED over 6 months in 2011. Subjects were screened with the PHQ-9, a nine-item questionnaire that is a validated, reliable predictor of MDD. We also queried respondents about demographic characteristics as well as the presence of at least one chronic disease (heart disease, hypertension, asthma, diabetes, HIV, cancer, kidney disease, or cerebrovascular disease). We evaluated the association between MDD and chronic illnesses with both bivariate analysis and multivariate logistic regression controlling for demographic characteristics (age, race, sex, income, and insurance coverage). Results: Our response rate was 90.7%, with a final sample size of 1012. Of our total sample, 525 (51.9%) had at least one of the chronic illnesses defined above. Of this group, 162 (30.9%) screened positive for MDD as compared to 82 (16.6%) of the group without chronic illnesses (p < 0.0001). In multivariate analysis, persons with chronic illnesses had an odds ratio for a positive depression screen of 1.80 (1.31, 2.50) as compared to persons without illness. Among the subset of persons with chronic illnesses (n = 525), 46.9% of those with MDD had ≥3 visits in the prior 364 days as compared to 34.4% of those without MDD (p = 0.007). Conclusion: Our study found a high prevalence of untreated MDD among persons with chronic illnesses who present to the ED. Depression is associated with more frequent emergency department use among this population. Initial Blood Alcohol Level Aids CIWA in Predicting Admission for Alcohol Withdrawal Craig Hullett, Douglas Rappaport, Mary Teeple, Daniel Butler, Arthur Sanders; University of Arizona, Tucson, AZ Background: Assessment of alcohol withdrawal symptoms is difficult in the emergency department. The Clinical Institute Withdrawal Assessment (CIWA) is commonly used, but other factors may also be important predictors of withdrawal symptom severity. Objectives: The purpose of this study is to determine whether CIWA score at presentation to triage was predictive of later admission to the hospital. Methods: A retrospective study of patients presenting to an acute alcohol and drug detoxification hospital was performed from July 2010 through January 2011. Patients were excluded if other drug withdrawal was present in addition to alcohol. Initial assessment included age, sex, vital signs, and blood alcohol level (BAL), in addition to hourly CIWA score. Admission is indicated for a CIWA score of 10 or higher. Data were analyzed by selecting all patients not immediately admitted at initial presentation.
Logistic regression using Wald's criteria for stepwise inclusion was used to determine the utility of the initially gathered CIWA, BAL, longest sobriety, liver cirrhosis, and vital signs in predicting subsequent admission. Results: There were 123 patients who fit the inclusion criteria, with 9 admitted for treatment at initial intake and another 27 admitted during the following 10 hours. Logistic regression indicated that presenting BAL was a strong predictor (p = 0.01) of admission for treatment after initial presentation, as was presenting CIWA (p = 0.03). Thus, presenting BAL provided a substantial addition beyond initial CIWA in predicting later admission. No other variables added significantly to the prediction of later admission. To determine the interaction between presenting BAL and CIWA scores, we ran a repeated measures analysis of the first five CIWA scores (from presentation to 4 hours later), using BAL split into low (BAL < 0.10) and high (BAL > 0.10) groups (see figure). Their interaction was significant, F(1, 93) = 11.86, p < 0.001, η² = 0.11. Those presenting with higher initial BAL had suppressed CIWA scores that rose precipitously as the alcohol cleared. Those with low presenting BAL showed a decline in CIWA over time. Conclusion: Initial assessment using the common assessment tool CIWA is aided significantly by BAL assessment. Patients with higher presenting BAL are at higher risk for progression to serious alcohol withdrawal symptoms. Objectives: To describe patient and visitor characteristics and perspectives on the role of visitors in the ED and determine the effect of visitors on ED and hospital outcome measures. Methods: This cross-sectional study was done in an 81,000-visit urban ED, and data collection was attempted for all patients over a consecutive 96-hour period from August 25 to 28, 2011. Trained data collectors were assigned to the ED continuously for the study period. Patients assigned to a rapid care section of the ED (24%) were excluded. A visitor was defined as a person other than a health care provider (HCP) or hospital staff present in a patient's room at any time. Patient perspectives on visitors were assessed in the following domains: transportation, emotional support, physical care, communication, and advocating for the patient. ED and hospital outcome measures pertaining to ED length of stay (LOS) and charges, hospital admission rate, and hospital LOS and charges were obtained from patient medical records and hospital billing. Data analyses included frequencies, Student's t-tests for continuous variables, and chi-square tests of association for categorical variables. All tests for significance were two-sided. Objectives: To examine the effect of Sunday alcohol availability on ethanol-related visits and alcohol withdrawal visits to the ED. Methods: Study design was a retrospective before-after study using electronically archived hospital data at an urban, safety-net hospital. All adult non-prisoner ED visits from 1/1/2005 to 12/31/2009 were analyzed. An ethanol-related ED visit was defined by ICD-9 codes related to alcohol (291.x, 303.x, 305.0, 980.0). An alcohol withdrawal visit was defined by ICD-9 codes of delirium tremens (291.0), alcohol psychosis with hallucination (291.3), and ethanol withdrawal (291.81). We generated a ratio of ethanol-related ED visits to total ED visits (ethanol/total) and a ratio of alcohol withdrawal ED visits to total ED visits (withdrawal/total). A day was redefined as 8 AM to 8 AM.
The ratios were averaged within the four seasons to account for seasonal variations. Data from summer 2008 were dropped as they spanned the law change. We stratified data into Sunday and non-Sunday days prior to analysis to isolate the effects of the law change. We used multivariable linear regression to estimate the association of the ratio with the law change while adjusting for time and the seasons. Each ratio was modeled separately. The interaction between time and the law change was assessed using p < 0.05. Results: During the study there were a total of 212,189 ED visits, including 12,042 (6% of total) ethanol-related visits and 5,496 (3% of total) alcohol withdrawal visits. Unadjusted ratios in seasonal blocks are plotted in the figure with associated 95% CIs and best-fit regression lines for before and after the law change, respectively. After adjusting for time and season in the multivariable linear regression, we found no significant association of either ethanol/total or withdrawal/total with the law change. This remained true for both Sunday and non-Sunday data. None of the interactions assessed was significant. Conclusion: The change in Colorado law to allow the sale of full-strength alcoholic beverages on Sundays did not significantly affect ethanol-related or alcohol withdrawal ED visits. Background: Olanzapine is a second-generation antipsychotic (SGA) with actions at the serotonin/histamine receptors. Post-marketing reports and a case report have documented dangerous lowering of blood pressure when this antipsychotic is paired with benzodiazepines, but a recent small study found no greater decreases in blood pressure compared to another antipsychotic such as haloperidol. Decreases in oxygen saturations, however, were larger when olanzapine was combined with benzodiazepines in alcohol-intoxicated patients. It is unclear whether these vital sign changes are associated with the intramuscular (IM) route only. Objectives: To assess vital signs following administration of either oral (PO) or IM olanzapine, with or without benzodiazepines (benzos) and with or without concurrent alcohol intoxication. Methods: This is a structured retrospective chart review of all patients who received olanzapine in an academic medical center ED from 2004-2010 and who had vital signs documented both before medication administration and within four hours afterwards. Vital sign changes were calculated as the pre-dose minus the lowest post-dose vital sign within 4 hours, and were analyzed in an ANOVA with route (IM/PO), benzo use (+/−), and alcohol use (+/−) as factors. Significance was set at p < 0.05. Results: There were 482 patients who received olanzapine over the study period. A total of 275 patients (225 PO, 50 IM) met inclusion criteria. Systolic blood pressures decreased across all groups as patients' agitation decreased. Neither the route of administration, concurrent use of benzos, nor the use of alcohol was associated with significant changes in systolic BP (p = NS for all comparisons; see Figure 1). Decreases in oxygen saturations, however, were significantly larger for alcohol-intoxicated patients who subsequently received IM olanzapine + benzos compared to other groups (route: p < 0.001; alcohol: p < 0.01; route × alcohol: p < 0.001; route × benzos × alcohol: p < 0.05; see Figure 2).
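The vital-sign analysis described in the olanzapine study above is a standard factorial ANOVA. A minimal sketch of that kind of model is shown below; the column names, simulated values, and effect sizes are hypothetical, and the original work may have handled covariates and repeated measures differently.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
# Three hypothetical patients per cell of the 2x2x2 design
for route, benzo, alcohol in itertools.product(["IM", "PO"], ["yes", "no"], ["yes", "no"]):
    for _ in range(3):
        # Simulated SpO2 drop (pre-dose minus lowest post-dose value),
        # made larger when IM route, benzos, and alcohol all co-occur
        drop = 1.5 + (1.0 if route == "IM" else 0.0)
        if route == "IM" and benzo == "yes" and alcohol == "yes":
            drop += 2.0
        rows.append({"route": route, "benzo": benzo, "alcohol": alcohol,
                     "spo2_drop": drop + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Factorial ANOVA with all interaction terms (route x benzo x alcohol)
model = smf.ols("spo2_drop ~ C(route) * C(benzo) * C(alcohol)", data=df).fit()
print(anova_lm(model, typ=2))
```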
Conclusion: Alcohol and benzos are not associated with significant decreases in blood pressure after PO olanzapine, but IM olanzapine + benzos is associated with potentially significant oxygen desaturations in patients who are intoxicated. Intoxicated patients may have differential effects with the use of IM SGAs such as olanzapine when combined with benzos, and should be studied separately in drug trials. Patients with a Psychiatric Diagnosis Rasha Buhumaid, Jessica Riley, Janice Blanchard; George Washington University, Washington, DC Background: Literature suggests that frequent emergency department (ED) use is common among persons with a mental health diagnosis. Few studies have documented risk factors associated with increased utilization among this population. Objectives: To understand demographic characteristics of frequent users of the emergency department and describe characteristics associated with their visits. It was hypothesized that frequent visitors would have a higher rate of medical comorbidities than infrequent visitors. Methods: This was a retrospective study of patients presenting to an urban, academic emergency department in 2009. A cohort of all patients with a mental health-related final ICD-9 coded diagnosis (Axis I or Axis II) was extracted from the electronic medical record. Using a standard abstraction form, a medical chart review collected information about medical comorbidities, substance abuse, race, age, sex, and insurance coverage, as well as diagnosis, disposition, and time of each visit. Results: Our sample consisted of 109 frequent users (≥4 visits in a 365-day period) and 442 infrequent users (≤3 visits in a 365-day period). Frequent users were more likely to be male (68% vs. 54.5%, p = 0.01) and black (86% vs. 59%, p < 0.0001), and had a higher average number of comorbid conditions (2.0, 95% CI 1.73-2.26) as compared to infrequent users (1.0, 95% CI 0.90-1.10). A higher percentage of visits in the infrequent user group occurred during the day (49% vs. 38.3%, p < 0.0001), while a higher percentage of visits by frequent users occurred after midnight (24.3% vs. 16.6%, p = 0.0003). Visits in the frequent user group were less likely to be for a psychiatric complaint (34.3% vs. 81.2%) and less likely to result in a psychiatric admission (18.3% versus 56.7%) as compared to the infrequent user group (p < 0.0001). Conclusion: Our data indicate that among patients with psychiatric diagnoses, those who make frequent ED visits have a higher rate of comorbid conditions than infrequent visitors. Despite their increased use of the ED, frequent visitors have a significantly lower psychiatric admission rate. Many of the visits by frequent users are for non-psychiatric complaints and may reflect poor access to outpatient medical and mental health services. Emergency departments should consider interventions to help address social and medical issues among mental health patients who frequently use ED services. Background: The World Health Organization estimates that one million people die annually by suicide. In the U.S., suicide is the fourth leading cause of death between the ages of 10 and 65. Many of these patients are seen in the ED, while outpatient visits for depression are also high. No recent analysis has compared these groups. Objectives: To determine if there is a relationship between the incidence of suicidal and depressed patients presenting to emergency departments and the incidence of depressed patients presenting to outpatient clinics from 2002-2008.
The secondary objective was to analyze trends in suicidal patients presenting to the ED. Methods: We used NHAMCS (National Hospital Ambulatory Medical Care Survey) and NAMCS (National Ambulatory Medical Care Survey), national surveys completed by the Centers for Disease Control, which provide a sampling of emergency department and outpatient visits, respectively. For both groups, we used mental-health-related ICD-9-CM codes, E-codes, and reasons for visit. We compared suicidal and depressed patients who presented to the ED with those who presented to outpatient clinics. Our subgroup analyses included age, sex, race/ethnicity, method of payment, regional variation, and urban versus rural distribution. Results: ED visits for depression (1.14%) and suicide attempts (0.49%) remained stable over the years, with no significant linear trend. However, office visits for depression significantly decreased from 3.14% of visits in 2002 to 2.65% of visits in 2008. Non-Latino whites had a higher percentage of ED visits for depression (1.25%) and suicide attempt (0.57%) (p < 0.0001), and a higher percentage of office visits for depression than all other groups. Among patients age 50-69 years, ED visits for suicide attempt significantly increased from 0.12% in 2002 to 0.44% in 2008. Homeless patients had a higher percentage of ED visits for depression (6.5%) and suicide attempt. Background: For potentially high-risk ED patients with psychiatric complaints, efficient ED throughput is key to delivering high-quality care and minimizing time spent in an unsecured waiting room. Objectives: We hypothesized that adding a physician in triage would improve ED throughput for psychiatric patients. We evaluated the relationship between the presence of an ED triage physician and waiting room (WR) time, time to first physician order, time to ED bed assignment, and time spent in an ED bed. Methods: The study was conducted from 11/2009 to 2/2011 at an academic ED with 55,000 annual visits and a dedicated on-site emergency psychiatric unit. We performed a pre/post retrospective observational cohort study using administrative data, including weekend visits from noon to 10 pm, 8 months pre- and post-addition of weekend triage physicians. After adjusting for patient age, sex, insurance status, Emergency Severity Index score, mode of arrival, ED occupancy rate, WR count, boarding count, and average WR LOS, multiple linear regression evaluated the relationship between the presence of a triage physician and four ED throughput outcomes: time spent in the WR, time to first order, time spent in an ED bed, and the total ED LOS. Results: 565 visits met inclusion criteria, 280 in the 8 months before and 285 in the 8 months after physicians were assigned to triage on weekends. Table 1 reports demographic data; multivariate analysis results are found in Table 2. The presence of a triage physician was associated with an 8-minute (95% CI 0.6-15.2) increase in WR time and no associated change in time to first order, time spent in an ED bed, or the overall ED LOS. Conclusion: Use of triage physicians has been reported to decrease the time patients spend in an ED bed and improve ED throughput. However, for patients with psychiatric complaints, our analysis revealed a slight increase in WR time without evident change in the time to first order, time spent in an ED bed, or total ED LOS.
Improvements in ED throughput for psychiatric patients will likely require system-level changes, such as reducing ED boarding and improving lab efficiency to speed the process of medical clearance and reduce time spent in the unsecured WR. These findings may not be generalizable to EDs without a dedicated ED psychiatric unit with full-time social workers to assist with disposition. Initial assessment included CIWA scoring, repeated hourly, as well as other variables (see Table 1). Treatment and admission to the inpatient hospital were indicated for a CIWA score of 10 or higher. Statistical analysis was performed utilizing repeated-measures general linear modeling for CIWA scores and ANOVA for all other variables. Results: There were 123 patients who fit the inclusion criteria, with 9 admitted for treatment at initial intake and another 27 admitted during the following 10 hours. The table below compares the three most prevalent ethnic populations seen at our hospital. Native Americans presented at a significantly younger age (p < 0.05) than the other two ethnicities. Initial CIWA scores taken on admission were significantly lower in the Native American group than in the other two groups (p < 0.05), and at 1 hour a difference existed but failed to reach significance. Repeated-measures analysis indicated that CIWA scores progressed in a U-shaped curvilinear fashion (see Figure 1). Conclusion: Initial assessment utilizing CIWA scores appears to be affected by ethnicity. Care must be taken when assessing and making decisions on a single initial CIWA score. Further research is needed in this area as our numbers are small and differences might be seen in subsequent scoring. In addition, our study consists primarily of male patients and does not include African-American patients. Background: Age is a risk factor for adverse outcomes in trauma, yet evidence supporting the use of specific age cut-points to identify seriously injured patients for field triage is limited. Objectives: To evaluate under-triage by age, empirically examine the association between age and serious injury for field triage, and assess the potential effect of mandatory age criteria. Methods: This was a retrospective cohort study of injured children and adults transported by 48 EMS agencies to 105 hospitals in 6 regions of the Western U.S. from 2006-2008. Hospital records were probabilistically linked to EMS records using trauma registries, emergency department data, and state discharge databases. Serious injury was defined as an Injury Severity Score (ISS) ≥ 16 (the primary outcome). We assessed under-triage (triage-negative patients with ISS ≥ 16) by age decile and by different mandatory age criteria, and used multivariable logistic regression models to test the association (linear and non-linear) between age and ISS ≥ 16, adjusted for important confounders. Results: 260,027 injured patients were evaluated and transported by EMS over the 3-year period. Under-triage increased markedly for patients over 60 years, reaching 58% for those over 90 years (Figure 1). Mandatory age triage criteria decreased under-triage while substantially increasing over-triage: one ISS ≥ 16 patient identified for every 65 additional patients triaged to major trauma centers. Among patients not identified by other criteria, age had a strong non-linear association with ISS ≥ 16 (p < 0.01); the probability of serious injury steadily increased after 30 years, becoming more notable after 60 years (Figure 2).
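The non-linear age effect reported in the field-triage study above can be explored with an ordinary logistic regression that simply adds a curvature term for age. The sketch below uses simulated data and a quadratic term as one easy choice; the investigators' actual models, variables, and confounders may well have differed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(1, 95, n)
male = rng.integers(0, 2, n)
# Simulated outcome: probability of ISS >= 16 rises non-linearly after age 30
logit_p = -3.0 + 0.0006 * np.maximum(age - 30.0, 0.0) ** 2 + 0.2 * male
p = 1.0 / (1.0 + np.exp(-logit_p))
df = pd.DataFrame({"age": age, "male": male, "iss16": rng.binomial(1, p)})

# Logistic regression with linear and quadratic age terms, adjusted for
# one confounder; a spline for age would be a natural alternative
model = smf.logit("iss16 ~ age + I(age ** 2) + male", data=df).fit(disp=False)
print(model.summary())
```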
Conclusion: Under-triage in trauma increases in patients over 60 years, which may be reduced with mandatory age criteria at the expense of system efficiency. Among patients not identified by other criteria, the probability of serious injury steadily increased after 30 years, though there was no age at which risk abruptly increased. Background: Although limited resuscitation with hemoglobin-based oxygen carriers (HBOCs) improves survival in several polytrauma models, including those of traumatic brain injury (TBI) with uncontrolled hemorrhage (UH) via liver injury, their use remains controversial. Objectives: We examine the effect of HBOC resuscitation in a swine polytrauma model with UH by aortic tear +/− TBI. We hypothesize that limited resuscitation with HBOC would offer no survival benefit and would have similar effects in a model of UH via aortic tear +/− TBI. Methods: Anesthetized swine subjected to UH inflicted via aortic tear +/− fluid percussion TBI underwent equivalent limited resuscitation with HBOC, LR, or HBOC + nitroglycerin (NTG) (vasoattenuated HBOC) and were observed for 6 hours. Comparisons were between TBI and no-TBI groups with adjustment for resuscitation fluid type using two-way ANOVA with interaction and Tukey-Kramer adjustment for individual comparisons. Results: There was no independent effect of TBI on survival time after adjustment for fluid type (ANOVA, TBI term p = 0.59) and there was no interaction between TBI and resuscitation fluid type (ANOVA interaction term p = 0.12). There was a significant independent effect of fluid type on survival time (ANOVA p = 0.005). Background: Intracranial hemorrhage (ICH) after head trauma is a problem frequently encountered in the ED. An elevated INR is recognized as a risk factor for bleeding. However, in a patient with an INR in the normal range, a level associated with a lower risk of ICH is not known. Objectives: The aim of this study was to identify an INR threshold that could predict a decreased risk of ICH after head trauma in patients with a normal INR. It is hypothesized that there is a threshold at which the likelihood of bleeding decreases significantly. Methods: We conducted a study using data from a registry of patients with mild to severe head trauma (N = 3356) evaluated in a Level I trauma center in Canada between March 2008 and February 2011. All patients with a documented scan interpreted by a radiologist and a normal INR, defined as a value less than 1.6, were included. We determined the correlation between INR value binned by 0.1 and the proportion of patients with an ICH. The threshold was defined by consensus as an abrupt change of more than 10% in the percentage of patients with ICH. Univariate frequency distribution was tested with Pearson's chi-square test. Logistic regression analysis was then used to study the effects of INR on ICH with the following confounding factors: age, sex, and intake of warfarin, clopidogrel, or aspirin. Results are presented with 95% confidence intervals. Results: 751 patients met the inclusion criteria. The mean age was 55.3 ± 29.9 years, and 65% were men. 267 patients (35.6%) had an ICH on brain scan. We found a significantly lower risk of ICH at a threshold of INR less than 1.0 (p < 0.001, univariate OR = 0.37, 95% CI 0.25-0.54) and a strong correlation between INR and the risk of bleeding (R² = 0.8987). In fact, after adjustment for confounding variables, every 0.1 increase in INR was associated with an increased risk of having an ICH (OR 1.50; 95% CI 1.31-1.72).
Conclusion: We were able to demonstrate an INR threshold below which the probability of ICH was significantly lower. We also found a strong association between the risk of bleeding and increasing INR within the normal range, suggesting that clinicians should not be falsely reassured by a normal INR. Our results are limited by the fact that this is a retrospective study and a small proportion of traumatic brain-injured patients in our database had no scan or INR at their ED visit. A prospective cohort study would be needed to confirm our results. Background: Increasingly, patients with TBI are being seen and managed in the emergency neurology setting. Knowing which early signs are associated with prognosis can be helpful in directing the acute management. Objectives: To determine whether any factors early in the course of head trauma are associated with short-term outcomes including inpatient admission, in-hospital mortality, and return to the hospital within 30 days. Methods: This IRB-approved study is a retrospective review of patients with head injury presenting to our tertiary care academic medical center during a 9-month period. The dataset was created using REDCap, a data management solution hosted by our medical school's Center for Translational Science Institute. Results: The median age of the cohort (n = 500) was 26 years (IQR 15-48), with 62% being male. 84% had a GCS of 13-15 (mild TBI), 3% a GCS of 9-13 (moderate TBI), and 13% a GCS < 8 (severe TBI). 39% of patients were admitted to the hospital. The median length of hospital stay was 2 days, with an IQR of 1-5 days. Of those admitted, 53% had an ICU stay as well. The median ICU LOS was also 2 days, with an IQR of 1-6 days. Twenty-nine (6%) patients died during their hospital stay. Lower GCS was predictive of inpatient admission (P = 0.0003) as well as ICU days (P < 0.0001). Significant predictors of re-admission to the hospital within 30 days included hypotension (P = 0.002) upon initial presentation. The prehospital and ED GCS scores were not statistically significant. Significant predictors of in-hospital death in a model controlling for age included bradycardia (P = 0.0042), hyperglycemia (P = 0.0040), and lower GCS (P = 0.0003). The incidence of bradycardia (HR < 60) was 4.4%. Conclusion: Early hypotension, hyperglycemia, and bradycardia, along with lower initial GCS, are associated with significantly higher likelihood of hospital admission, including ICU admission, as well as in-hospital death and re-admission. Background: Over 23,000 people per day require treatment for ankle sprains, resulting in lost workdays and lost training for athletes. Platelet-rich plasma (PRP) is an autologous concentration of platelets which, when injected into the site of injury, is thought to improve healing by promoting inflammation through growth factor and cytokine release. Studies to date have shown mixed results, with few randomized or placebo-controlled trials. The Lower Extremity Functional Scale (LEFS) is a previously validated objective measure of lower extremity function. Objectives: Is PRP helpful in acute ankle sprains in the emergency department? Methods: Prospective, randomized, double-blinded, placebo-controlled trial. Patients with severe ankle sprains and negative x-rays were randomized to trial or placebo. Severe was defined as marked swelling and ecchymosis and inability to bear weight. Both groups had 50 cc of blood drawn.
Trial group blood was centrifuged with a Magellan Autologous Platelet Separator (Arteriocyte, Cleveland) to yield 3-4 cc of PRP. PRP, along with 0.5 cc of 1% lidocaine and 0.5 cc of 0.25% bupivacaine, was injected at the point of maximum tenderness by a blinded physician under ultrasound guidance. Control group blood was discarded and participants were injected in a similar fashion substituting sterile 0.9% saline for PRP. Both groups had visual analog scale (VAS) pain scores and LEFS on days 0, 3, 8, and 30. All participants had a posterior splint and were made non-weight-bearing for 3 days, after which they were re-examined, had their splint removed, and were asked to bear weight as tolerated. Participants were instructed not to use NSAIDs during the trial. Results: 1156 patients were screened and 37 were enrolled. Four withdrew before PRP injection was complete. Eighteen were randomized to PRP and 15 to placebo. See tables for results. VAS and LEFS are presented as means with SD in parentheses. Demographics were not statistically different between groups. Conclusion: In this small study, PRP did not appear to offer benefit in either pain control or healing. Both groups had improvement in their pain and functionality and did not differ significantly during the study period. Limitations include small study size and a large number of participant refusals. Methods: A structured chart review of all charts with ICD-9 radius fracture codes spanning March 18, 2010 to July 17, 2011 was conducted. Specific variable data were collected and categorized as follows: age, MOI, body mass index, and fracture location. The charts were reviewed by two medical students, with 10% of the charts reviewed by both students to confirm inter-rater reliability. Frequencies and inter-quartile ranges were determined. Comparisons were made with Fisher's exact test and multiple logistic regression. Results: 187 charts met inclusion criteria. 46 charts were excluded due to one of the following reasons: no fracture or no x-ray (14), isolated ulnar fracture (19), or undocumented or penetrating MOI (13). Of the analyzed patients (n = 141), distal radius fractures were most common (66%), followed by proximal (32%) and midshaft (2%). Chart reviewers were found to be reliable (κ = 1). Age and MOI were significantly associated with fracture location (see table). Ages 18-54 and bike accidents were more strongly associated with proximal radius fractures (odds ratio: 12 [2-94] and 5 [2-13], respectively). Conclusion: Patients presenting to our inner-city ED with a radius fracture are more likely to have a distal fracture. Adults 18-54 and patients in bike accidents had a significantly higher incidence of proximal fractures than other ages or MOIs. Background: Trauma centers use guidelines to determine the need for a trauma surgeon in the ED on patient arrival. A decision rule from Loma Linda University that includes penetrating injury and tachycardia was developed to predict which pediatric trauma patients require emergent intervention, and thus are most likely to benefit from surgical presence in the ED. Objectives: Our goal was to validate the Loma Linda Rule (LLR) in a heterogeneous pediatric trauma population and to compare it to the American College of Surgeons' Major Resuscitation Criteria (MRC). We hypothesized that the LLR would be more sensitive than the MRC for identifying the need for emergent operative or procedural intervention.
Methods: We performed a secondary analysis of prospectively collected trauma registry data from two urban Level I pediatric trauma centers with a combined annual census of approximately 115,000 visits. Consecutive patients <15 years old with blunt or penetrating trauma from 1993 through 2010 were included. Patient demographics, injury severity scores (ISS), times of ED arrival and surgical intervention, and all variables of both rules were obtained. The outcome (emergent operative intervention within 1 hour of ED arrival, or ED cricothyroidotomy or thoracotomy) was confirmed by trained, blinded abstractors. Sensitivities, specificities, and 95% confidence intervals (CIs) were calculated for both rules. Results: 8,079 patients were included, with a median age of 5.9 years and a median ISS of 9. Emergent intervention was required in 51 patients (0.6%). The LLR had a sensitivity ranging from 59.4% to 59.7% (95% CI: 25.9%-93.5%) and specificity ranging from 49.5% to 86.5% (95% CI: 21.6%-82.1%) between the two institutions. The MRC had a sensitivity ranging from 73.6% to 81.6% (95% CI: 54.7%-95.1%) and specificity ranging from 69.4% to 84.7% (95% CI: 54.7%-90.1%) between institutions. Conclusion: Emergent intervention is rare in pediatric trauma patients. The MRC was more sensitive for predicting the need for emergent intervention than the LLR. Neither set of criteria was sufficiently accurate to recommend its routine use for pediatric trauma patients. Droperidol for Sedation of Acute Behavioural Disturbance Leonie A. Calver1, Colin Page2, Michael Downes3, Betty Chan4, Geoffrey K. Isbister1; 1Calvary Mater Newcastle and University of Newcastle, Newcastle, Australia; 2Princess Alexandra Hospital, Brisbane, Australia; 3Calvary Mater Newcastle, Newcastle, Australia; 4Prince of Wales Hospital, Sydney, Australia Background: Acute behavioural disturbance (ABD) is a common occurrence in the emergency department (ED) and is a risk to staff and patients. There remains little consensus on the most effective drug for sedation of violent and aggressive patients. Prior to the Food and Drug Administration's black box warning, droperidol was commonly used and was considered safe and effective. Objectives: This study aimed to investigate the effectiveness of parenteral droperidol for sedation of ABD. Methods: As part of a prospective observational study, a standardised protocol using droperidol for the sedation of ABD was followed. Acute and delayed behavioral deficits were demonstrated in this rat model of CO toxicity, which parallels the neurocognitive deficit pattern observed in humans (see figure). Similar to prior studies, pathologic analysis of brain tissue demonstrated the highest percentage of necrotic cells in the cortex, pyramidal cells, and cerebellum. The collected data are summarized in the table. We have developed an animal model of severe CO toxicity evidenced by behavioral deficits and neuronal necrosis. Future efforts will compare neurologic outcomes in severely CO-poisoned rats treated with hypothermia and 100% inspired O2 versus HBO to normothermic controls treated with 100% inspired O2. Background: Multi-day ultramarathons are increasing in popularity, attracting more than 70,000 annual participants worldwide. Prior studies have consistently documented renal function impairment, but only after race completion. The incidence of renal injury during these multi-day ultramarathons is currently unknown. This is the first prospective cohort study to evaluate the incidence of acute kidney injury (AKI) in runners during a multi-day ultramarathon foot race.
Objectives: To assess the effect of inter-stage recovery versus cumulative damage on resulting renal function during a multi-day ultramarathon. Methods: Demographic and biochemical data gathered via phlebotomy and analyzed by i-STAT® (Abbott, NJ) were collected at the start and finish of Days 1 (25 miles), 3 (75 miles), and 5 (140 miles) during Racing The Planet's® 150-mile, 7-day self-supported desert ultramarathons. Pre-established RIFLE criteria using creatinine (Cr) and glomerular filtration rate (GFR) defined AKI as "No Injury" (Cr <1.5x normal, decrease of GFR <25%), "Risk" (Cr 1.5x normal, decrease of GFR by 25-49%), and "Injury" (Cr 2x normal, decrease of GFR by 50-75%). Results: Thirty racers (76% male) with a mean (±SD) age of 39 ± 10 years were studied during the 2008 Sahara (n = 7, 23.3%), 2008 Gobi (n = 10, 33%), and 2009 Namibia (n = 13, 43.3%) events. The average decrease in GFR from Day 1 start to Day 1 finish was 28 ± 25 (p < 0.001, 95% CI 18.5-37.6); from Day 1 start to Day 3 finish, 29.6 ± 20.1 (p < 0.001, 95% CI 18.4-40.7); and from Day 1 start to Day 5 finish, 30.9 ± 17.5 (p < 0.001, 95% CI 20.8-41). The proportions of runners categorized as Risk and Injury for AKI were 44.8% and 10% after Stage 1, 67% and 13% after Stage 3, and 57.1% and 7.1% after Stage 5. Conclusion: The majority of participants developed significant levels of renal impairment despite recovery intervals. Given the changes in renal function, potentially harmful non-steroidal anti-inflammatory drugs should be minimized to prevent exacerbating acute kidney injury. Background: More than 10% of the elderly abuse prescription drugs, and emergency medicine providers frequently struggle to identify features of opioid addiction in this population. The Prescription Drug Use Questionnaire (PDUQp) is a validated, 42-item, patient-administered tool developed to help health care providers better identify problematic opioid use, or dependence, in patients who receive opioids for the treatment of chronic pain. Objectives: To identify the prevalence of prescription drug misuse features in elderly ED patients. Methods: This cross-sectional, observational study was conducted between 07/2011 and 08/2011 in the ED of an urban, university-affiliated community hospital that serves a large geriatric population. All patients aged 65 to 89 inclusive were eligible, and were recruited on a convenience basis. Exclusion criteria included known dementia and critical illness. Outcomes of interest included self-reported history of prior prescription opioid use, substance abuse history, aberrant medication-taking behaviors, and PDUQp results. Results: One hundred patients were approached for participation. Two were excluded for inability to read English, three were receiving analgesia for metastatic cancer, 28 had never taken a prescription opioid, and seven refused to participate beyond pre-screening. Sixty patients completed the study (see Table 1). Of those, 13.3% reported four or more visits within 12 months; chronic pain was reported by 56.7%; debilitating pain by 55.9%; prior pain management referral by 18.3%; and storing opioids for future use by 30%. Seventeen patients reported current prescription opioid use and were administered the PDUQp (see figure).
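Returning to the ultramarathon AKI study above, the RIFLE thresholds it quotes amount to a small decision rule. The sketch below is a hypothetical illustration only: the function name and the restriction to the three categories used in the abstract are assumptions (the full RIFLE scheme also defines Failure, Loss, and End-stage classes), and this is not the investigators' code.

```python
def rifle_category(cr_ratio, gfr_decrease_pct):
    """Classify AKI with the simplified RIFLE cut-offs quoted in the abstract.

    cr_ratio: creatinine divided by the runner's baseline creatinine.
    gfr_decrease_pct: percent decrease in GFR from baseline (0-100).
    Returns 'No Injury', 'Risk', or 'Injury'.
    """
    if cr_ratio >= 2.0 or gfr_decrease_pct >= 50:
        return "Injury"
    if cr_ratio >= 1.5 or gfr_decrease_pct >= 25:
        return "Risk"
    return "No Injury"

# Hypothetical runner: creatinine 1.8x baseline and GFR down 40% after a stage
print(rifle_category(1.8, 40))  # -> "Risk"
```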
Among these 17 patients, 47.1% thought their pain was not being adequately treated; 41.2% reported having to increase the amount of pain medication they were taking over the prior 6 months; 35.3% saved pain medication for future use; 11.8% had doctors refuse to give them pain medication for fear that the patient would abuse the prescription opioids; and 29.4% reported having a previous drug or alcohol problem. Conclusion: Screening instruments, such as the PDUQp, facilitate identification of geriatric patients with features of opioid misuse. A high proportion of patients in this study saved opioids for future use. Interventions for safe medication disposal may decrease access to opioids and subsequent morbidity. Age extremes, male sex, and several chronic health conditions were associated with increased odds of heat stroke, hospital admission, and death in the ED by a factor of 2-3. Chronic hematologic disease (e.g. anemia) was associated with a 10-12-fold increase in adjusted odds of each of these outcomes. Conclusion: HRI imposes a substantial public health burden, and a wider range of chronic conditions confer susceptibility than previously thought. Males, older adults, and patients with chronic conditions, particularly anemia, are likely to have more severe HRI, be admitted, or die in the ED. Background: Carbon monoxide (CO) poisoning is a major cause of death worldwide. CO, produced by the incomplete combustion of hydrocarbons, has many toxic effects, especially on the heart and brain. CO binds strongly to cytochrome oxidase, hemoglobin, and myoglobin, causing hypoxia of organs and tissues. CO converts hemoglobin to carboxyhemoglobin, impairing oxygen transport through the body and causing severe hypoxia. Objectives: The aim of this study was to investigate levels of S100b and neuron-specific enolase (NSE) measured both at admission and at the sixth hour of hyperbaric or normobaric oxygen therapy in patients with a diagnosis of CO poisoning. Methods: The study was designed as a prospective observational laboratory study. Forty patients were enrolled in the study: 20 underwent normobaric oxygen therapy (NBOT) and the other 20 underwent hyperbaric oxygen therapy (HBOT). Levels of S100b and NSE were measured both at admission and at the sixth hour after admission for all patients. Demographic data, clinical characteristics, and outcome measures were recorded. All data were statistically analyzed. Results: In both treatment groups, mean levels of NSE after therapy were significantly lower than admission levels. Although levels of NSE measured before and 6 hours after treatment were higher in the HBOT group, the difference between groups was not statistically significant (p > 0.05). In both treatment groups, mean levels of S100b after therapy were likewise significantly lower than admission levels. Although levels of S100b measured before and 6 hours after treatment were higher in the HBOT group, the difference between groups was not statistically significant (p > 0.05). Additionally, while levels of S100b measured after treatment in the HBOT group were lower compared to the NBOT group, the difference between groups was also not statistically significant (p > 0.05). Conclusion: In our study, as in previous studies, levels of S100b and NSE were elevated as evidence of brain injury in CO poisoning and decreased with therapy. The decrease in S100b levels was more pronounced.
According to our results, S100b and NSE may be useful markers in cases of CO poisoning; however, we did not find evidence that they add value in determining HBOT indications or in interpreting COHb levels in the management of patients with a diagnosis of CO poisoning. Neurons Objectives: This study was conducted to determine if neurons in the DMH, and its neighbor the paraventricular hypothalamus (PVN), were likewise involved in MDMA-mediated neuroendocrine responses, and if serotonin 1a receptors (5-HT1a) play a role in this regional response. Methods: In both experiments, male Sprague Dawley rats (n = 5-12/group) were implanted with bilateral cannulas targeting specific regions of the brain, i.v. catheters for drug delivery, and i.a. catheters for blood withdrawal. Experiments were conducted in Raturn cages, which allow blood withdrawal and drug administration in freely moving animals while recording their locomotion. In the first experiment, rats were microinjected into the DMH, the PVN, or a region between, with the GABAa agonist muscimol (80 pmol/100 nl/side) or PBS (100 nl) and 5 min later were injected with either MDMA (7.5 mg/kg i.v.) or an equal volume of saline. Blood was withdrawn prior to microinjections and 15 minutes after MDMA for RIA measurement of plasma ACTH. Locomotion was recorded throughout the experiment. In a separate experiment of identical design, either the 5-HT1a antagonist WAY 100635 (WAY, 5 nmol/100 nl/side) or saline was microinjected followed by i.v. injection of MDMA or saline. In both experiments, increases in ACTH and distance traveled were compared between groups using ANOVA. Results: When compared to controls, microinjections of muscimol into the DMH, PVN, or the area in between attenuated plasma increases in ACTH and locomotion evoked by MDMA. When microinjected into the DMH or PVN, WAY had no effect on ACTH, but when injected into the region of the DMH it significantly increased locomotion. Background: Poor hand-offs between physicians when admitting patients have been shown to be a major source of medical errors. Objectives: We propose that training in a standardized admissions protocol by emergency medicine (EM) to internal medicine (IM) residents would improve the quality and quantity of communication of vital patient information. Methods: EM and IM residents at a large academic center developed an evidence-based admission handover protocol termed the '7Ps' (Table 1). EM and IM residents received '7Ps' protocol training. IM residents recorded prospectively how well each of the seven Ps was communicated during each admission pre- and post-intervention. IM residents also assessed the overall quality of the handover using a Likert scale. The primary outcome was the change in the number of 'Ps' conveyed by the EM resident to the accepting IM resident. Data were collected for six weeks before and then for six weeks starting two weeks after the educational intervention. Results: There were 78 observations recorded in the pre-intervention (control) group and 48 observations in the post-intervention group. For each of the seven 'Ps' the percentage of observations in which all of the information was communicated is shown in Table 2. The communication of 'Ps' increased following the intervention. This rise was statistically significant for patient information and pending tests. In the control group the mean of total communicated Ps was 5 and in the intervention group, the mean increased to 6 (p < 0.005).
The quality of the handover communication had a mean rating of 3.9 in the control group and 4.3 in the intervention group (p < 0.05). Conclusion: This educational intervention in a cohort of EM and IM residents improved the quality and quantity of vital information communicated during patient handovers. The intervention was statistically significant for patient information transfer and tests pending. The results are limited by study size. Based on our preliminary data, an agreed-upon handover protocol with training improved the amount and quality of communication during patients' hospital admission on simple items that were likely had been taken for granted as routinely transmitted. We recruited a convenience sample of residents and students rotating in the pediatric emergency department. A two-sided form had the same seven clinical decisions on each side: whether to perform blood, urine, spinal fluid tests, imaging, IV fluids, antibiotics, or a consult. The rating choices were: Definitely Not, Probably Not, Probably Would, and Definitely Would. Trainees rated each decision after seeing a patient, but before presenting to the preceptor, who, after evaluating the patient, rated the same seven decisions on the second side of the form. The preceptor also indicated the most relevant decision (MRD) for that patient. We examined the validity of the technique using hypothesis testing; we posited that residents would have a higher degree of concordance with the preceptor than would medical students. This was tested using dichotomized analyses (accuracy, kappa) and ROC curves with the preceptor decision as the gold standard. Results: Thirty-one students completed 130 forms (median 4 forms; IQR 2,6) and 23 residents completed 206 (6; IQR 3,12). Preceptors included 24 attending physicians and 3 fellows (9; IQR 4, 21). Students were concordant with preceptors in 70% (k = 0.38) of MRD while residents agreed in 79.6% (p = 0.045), k = 0.59. ROC analysis revealed significant differences between students and residents in the AUC for the MRD (0.84 vs 0.72; p = 0.03). Conclusion: This measure of trainee-preceptor concordance requires further research but may eventually allow for assessment of trainee clinical decision-making. It also has the pedagogical advantage of promoting independent trainee decision-making. Background: Basic Life Support (BLS) and Advanced Cardiac Life Support (ACLS) are integral parts of emergency cardiac care. This training is usually reserved in most institutions for residents and faculty. The argument can be made to introduce BLS and ACLS training earlier in the medical student curriculum to enhance acquisition of these skills. Objectives: The goal of the survey was to characterize the perceptions and needs of graduating medical students in regards to BLS and ACLS training. Methods: This was a survey-based study of graduating fourth year medical students at a U.S. medical school. The students were surveyed before voluntarily participating in a student-led ACLS course in March of their final year. The surveys were distributed before starting the training course. Both BLS and ACLS training, comfort levels, and perceptions were assessed in the survey. Results: Of the 182 students in the graduating class, 152 participated in the training class with 109 (72%) completing the survey. 50% of students entered medical school without any prior training and 49% started clinics without training. 
83.5% of students reported witnessing an average of 3.0 in-hospital cardiac arrests during training (range of 0-20). Overall, students rated their preparedness 2.0 (SD 1.0) for adult resuscitations on a 1-5 Likert scale with 1 being unprepared. 98% and 92% of students believe that BLS and ACLS, respectively, should be included in the medical student curriculum, with a preference for teaching before starting clerkships. 36% of students avoided participating in resuscitations due to lack of training. Of those, 95% said they would have participated had they been trained. Conclusion: To our knowledge, this is one of the first studies to address the perceptions of and needs for BLS and ACLS training in U.S. medical schools. Students feel that BLS and ACLS training is needed in their curriculum and would possibly enhance perceived comfort levels and willingness to participate in resuscitations. Background: Professionalism is one of six core competency requirements of the ACGME, yet defining and teaching its principles remains a challenge. The ''social contract'' between physician and community is clearly central to professionalism, so determining the patient's understanding of the physician's role in the relationship is important. Because specialization has created more narrowly focused and often quite different interactions in different medical environments, the patient concept of professionalism in different settings may vary as well. Objectives: We hoped to determine if patients have different conceptions of professionalism when considering physicians in different clinical environments. Methods: Patients were surveyed in the waiting room of an emergency department, an outpatient internal medicine clinic, and a pre-operative/anesthesia clinic. The survey contained 18 examples of attributes, derived from the American Board of Internal Medicine's eight characteristics of professionalism. Participants were asked to rate, on a 10-point scale, the importance of a physician possessing each attribute. ANOVA was used to compare the sites for each question. Results: Of 604 patients who took the survey, 200 were in the emergency department, 202 were in the medicine clinic, and 202 were in the pre-operative clinic. Females comprised 56% of the study group and the average age was 49 with a range from 18 to 94. There was a significant difference on the attribute of ''providing a portion of work for those who cannot pay;'' this was rated higher in the emergency department (p = 0.003). There was near-significance (p = 0.05) on the attribute of ''being able to make difficult decisions under pressure,'' which was rated higher in the pre-op clinic. There was no difference for any of the other questions. The top four professional attributes at each clinical site were the same: ''honesty,'' ''excellence in communication and listening,'' ''taking full responsibility for mistakes,'' and ''technical competence/skill;'' the bottom two were ''being an active leader in the community'' and ''patient concerns should come before a doctor's family commitments.'' Conclusion: Very few differences between clinical sites were found when surveying patient perception of the important elements of medical professionalism. This may suggest a core set of values desired by patients for physicians across specialties.
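The professionalism survey above compared attribute ratings across three clinical sites with ANOVA. A minimal sketch of that kind of one-way comparison is shown below; the ratings are invented for illustration and scipy is assumed to be available.

from scipy import stats

# Hypothetical 10-point importance ratings for one attribute at each survey site
ed_ratings = [10, 9, 8, 10, 7, 9]      # emergency department
clinic_ratings = [8, 7, 9, 6, 8, 7]    # internal medicine clinic
preop_ratings = [7, 8, 6, 7, 9, 8]     # pre-operative/anesthesia clinic

f_stat, p_value = stats.f_oneway(ed_ratings, clinic_ratings, preop_ratings)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")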
Emergency Medicine Faculty Knowledge of and Confidence in Giving Feedback on the ACGME Core Competencies Todd Guth, Jeff Druck, Jason Hoppe, Britney Anderson University of Colorado, Aurora, CO Background: The ACGME mandates that residency programs assess residents based upon six core competencies. Although the core competencies have been in place for a number of years, many faculty are not familiar with the intricacies of the competencies and have difficulty giving competency-specific feedback to residents. Objectives: The purpose of the study is to determine the extent to which emergency medicine (EM) faculty can identify the ACGME core competencies correctly and to determine faculty confidence with giving general feedback and core competency focused feedback to EM residents. Methods: Design and Participants: At a single department of EM, a survey of twenty-eight faculty members, their knowledge of the ACGME core competencies, and their confidence in providing feedback to residents was conducted. Confidence levels in giving feedback were scored on a Likert scale from 1 to 5. Observations: Descriptive statistics of faculty confidence in giving feedback, identification of professional areas of interest, and identification of the ACGME core competencies were determined. Mann-Whitney U tests were used to make comparisons between groups of faculty given the small sample size of the respondents. Results: There was a 100% response rate of the 28 faculty members surveyed. Eight faculty members identified themselves as primarily focused on education. Although those faculty members identifying themselves as focused on education scored higher than non-education focused faculty for all type of feedback (general feedback, constructive feedback, negative feedback), there was only a statistical difference in confidence levels 4.57 versus 2.65 (p < 0.002) for ACGME core competency specific feedback when compared to noneducation focused faculty. While education focused faculty correctly identified all six of ACGME core competencies 94% of the time, not one of the non-education focused faculty identified all six of the core competencies correctly. Non-education focused faculty only correctly identified three or more competencies 25% of the time. Conclusion: If residency programs are to assess residents using the six ACGME core competencies, additional faculty development specific to the core competencies will be needed to train all faculty on the core competencies and on how to give core competency specific feedback to EM residents. There is no clear consensus as to the most effective tool to measure resident competency in emergency ultrasound. Objectives: To determine the relationship between the number of scans and scores on image recognition, image acquisition, and cognitive skills as measured by an objective structured clinical exam (OSCE) and written exam. Secondarily, to determine whether image acquisition, image recognition, and cognitive knowledge require separate evaluation methodologies. Methods: This was a prospective observational study in an urban Level I ED with a 3-year ACGME-accredited residency program. All residents underwent an ultrasound introductory course and a one-month ultrasound rotation during their first and second years. Each resident received a written exam and OSCE to assess psychomotor and cognitive skills. The OSCE had two components: (1) recognition of 22 images, and (2) acquisition of images. 
A Registered Diagnostic Medical Sonographer (RDMS)-certified physician observed each bedside examination. A pre-existing residency ultrasound database was used to collect data about number of scans. Pearson correlation coefficients were calculated for number of scans, written exam score, image recognition, and image acquisition scores on the OSCE. Results: Twenty-nine residents, who performed an average of 247 scans (range 118-617), were enrolled from March 2010 to February 2011. There was no significant correlation between number of scans and written exam scores. An analysis of the number of scans and the OSCE found a moderate correlation with image acquisition (r = 0.42, p = 0.029) and image recognition (r = 0.61, p < 0.01). Pearson correlation analysis between the image acquisition score and image recognition score found that there was no correlation (r = 0.175, p = 0.383). There was a moderate correlation of image acquisition scores with written scores (r = 0.541, p = 0.025) and of image recognition scores with written scores (r = 0.596, p = 0.019). Conclusion: The number of scans does not correlate with written tests but has a moderate correlation with image acquisition and image recognition. This suggests that resident education should include cognitive instruction in addition to scan numbers. We conclude that multiple methods are necessary to examine resident ultrasound competency. Background: Although emergency physicians must often make rapid decisions that incorporate their interpretation of an ECG, there is no evidence-based description of ECG interpretation competencies for emergency medicine (EM) trainees. The first step in defining these competencies is to develop a prioritized list of ECG findings relevant to EM contexts. Objectives: The purpose of this study was to categorize the importance of various ECG diagnoses and/or findings for the EM trainee. Methods: We developed an extensive list of potentially important ECG diagnoses identified through a detailed review of the cardiology and EM literature. We then conducted a three-round Delphi expert opinion-soliciting process where participants used a five-point Likert scale to rate the importance of each diagnosis for EM trainees. Consensus was defined as a minimum of 75 percent agreement on any particular diagnosis at the second round or later. In the absence of consensus, stability was defined as a shift of 20 percent or less after successive rounds. Results: Twenty-two EM experts participated in the Delphi process, sixteen (72%) of whom completed the process. Of those, fifteen were experts from eleven different EM training programs across Canada and one was a recognized expert in EM electrocardiography. Overall, 77 diagnoses reached consensus, 42 achieved stability, and one diagnosis achieved neither consensus nor stability. Out of 120 potentially important ECG diagnoses, 53 (43%) were considered ''Must know'' diagnoses, 62 (51%) ''Should know'' diagnoses, and 7 (6%) ''Nice to know'' diagnoses. Conclusion: We have categorized ECG diagnoses within an EM training context, knowledge of which may allow clinical EM teachers to establish educational priorities. This categorization will also facilitate the development of an educational framework to establish EM trainee competency in ECG interpretation. ''Rolling Refreshers'' Background: Cardiac arrest survival rates are low despite advances in cardiopulmonary resuscitation.
High-quality CPR has been shown to impart greater cardiac arrest survival; however, retention of basic CPR skills by health care providers has been shown to be poor. Objectives: To evaluate practitioner acceptance of an in-service CPR skills refresher program, and to assess for operator response to real-time feedback during refreshers. Methods: We prospectively evaluated a ''Rolling Refresher'' in-service program at an academic medical center. This program is a proctored CPR practice session using a mannequin and CPR-sensing defibrillator that provides real-time CPR quality feedback. Subjects were basic life support-trained providers who were engaged in clinical care at the time of enrollment. Subjects were asked to perform two minutes of chest compressions (CCs) using the feedback system. CCs could be terminated when the subject had completed approximately 30 seconds of compressions with <3 corrective prompts. A survey was then completed to obtain feedback regarding the perceived efficacy of this training model. CPR quality was then evaluated using custom analysis software to determine the percent of CC adequacy in 30-second intervals. Results: Enrollment included 88 subjects from the emergency department and critical care units (55 nurses, 17 physicians, 16 students and allied health professionals). All participants completed a survey and 61 CPR performance data logs were obtained. Positive impressions of the in-service program were registered by 81% (71/88) and 74% (65/88) reported a self-perceived improvement in skills confidence. Eighty-three percent (73/88) of respondents felt comfortable performing this refresher during a clinical shift. Thirty-nine percent (24/61) of episodes exhibited adequate CC performance with approximately 30 seconds of CC. Of the remaining 37 episodes, 71.1 ± 29.2% of CC were adequate in the first 30 seconds with 80.1 ± 28.6% of CC adequate during the last 30-second interval (p = 0.1847). Of these 37 individuals, 30 improved or had no change in their CPR skills, and 7 individuals' skills declined during CC performance (p = 0.007). Conclusion: Implementation of a bedside CPR skill refresher program is feasible and is well received by hospital staff. Real-time CPR feedback improved CPR skill performance during the in-service session. Teaching Emergency Medicine Skills: Is A Self-directed, Independent, Online Curriculum The Way Of The Future? Tighe Crombie, Jason R. Frank, Stephen Noseworthy, Richard Gerein, A. Curtis Lee University of Ottawa, Ottawa, ON, Canada Background: Procedural competence is critical to emergency medicine, but the ideal instructional method to acquire these skills is not clear. Previous studies have demonstrated that online tutorials have the potential to be as effective as didactic sessions at teaching specific procedural skills. Objectives: We studied whether a novel online curriculum teaching pediatric intraosseous (IO) line insertion to novice learners is as effective as a traditional classroom curriculum in imparting procedural competence. Methods: We conducted a randomized controlled educational trial of two methods of teaching IO skills. Preclinical medical students with no past IO experience completed a written test and were randomized to either an online or classroom curriculum. The online group (OG) was given password-protected access to a website and instructed to spend 30 minutes with the material while the didactic group (DG) attended a lecture of similar duration.
Participants then attended a 30-minute unsupervised manikin practice session on a separate day without any further instruction. A videotaped objective structured clinical examination (OSCE) and post-course written test were completed immediately following this practice session. Finally, participants were crossed over into the alternate curriculum and were asked to complete a satisfaction survey that compared the two curricula. Results were compared with a paired t-test for written scores and an independent t-test for OSCE scores. Results: Sixteen students completed the study. Pre-course test scores of the two groups were not significantly different prior to accessing their respective curricula (mean scores of 32% for OG and 34% for DG, respectively; p > 0.05). Post-course written scores were also not significantly different (both with means of 76%; p > 0.05); however, for the post-treatment OSCE scores, the OG group scored significantly higher than the DG group (mean scores of 92.6% and 88.1%; t(14) = 1.76, p < 0.05.) Conclusion: This novel online curriculum was superior to a traditional didactic approach to teaching pediatric IO line insertion. Novice learners assigned to a selfdirected online curriculum were able to perform an emergency procedural skill to a high level of performance. EM educators should consider adopting online teaching of procedural skills. Background: Applicants to EM residency programs obtain information largely from the internet. Curricular information is available from a program's website (PW) or the SAEM residency directory (SD). We hypothesize that there is variation between these key sources. Objectives: To identify discrepancies between each PW and SD. To describe components of PGY1-3 EM residency programs' curricula as advertised on the internet. Methods: PGY1-3 residencies were identified through the SD. Data were abstracted from individual SD and PW pages identifying pre-determined elements of interest regarding rotations in ICU, pediatrics, inpatient (medicine, pediatrics, general surgery), electives, orthopedics, toxicology, and anesthesia. Agreement between the SD and PW was calculated using a Cohen's unweighted kappa calculation. Curricula posted on PWs were considered the gold standard for the programs' current curricula. Results: A total of 117 PGY1-3 programs were identified through the SD and confirmed on the PW. Ninetyone of 117 programs (78%) had complete curricular information on both sites. Only these programs were included in the kappa analysis for SD and PW comparisons. Of programs with complete listings, 66 of 91 programs (73%) had at least one discrepancy. The agreement of information between PW and SD revealed a kappa value of 0.26 (95% CI 0.19-0.33). Analysis of PW revealed that PGY1-3 programs have an average of 4.15 (range, 2-9), 3.1 (range, 1-6), 1.7 (range, 0-4), and 1.0 (range, 0-4) blocks of ICU, pediatrics, elective, and inpatient, respectively. Common but not RRC-mandated rotations in orthopedics, toxicology, and anesthesiology are present in 77, 80, and 93 percent of programs, respectively. Conclusion: Publicly accessible curricular information through the SD and PW for PGY1-3 EM programs only has fair agreement (using commonly accepted kappa value guides). Applicants may be confused by the variability of data and draw inaccurate conclusions about program curricula. from the gravid uterus and improves cardiac output; however, this theory has never been proven. 
Objectives: We set out to determine the difference in inferior vena cava (IVC) filling when third-trimester patients were placed in supine, LLT, and right lateral tilt (RLT) positions using IVC ultrasound. Methods: Healthy pregnant women in their third trimester presenting to the labor and delivery suite were enrolled. Patients were placed in three different positions (supine, RLT, and LLT) and IVC maximum (max) and minimum (min) measurements were obtained using the intercostal window in short axis approximately two centimeters below the entry of the hepatic veins. IVC collapse index (CI) was calculated for each measurement using the formula (max-min)/max. In addition, blood pressure, heart rate, and fetal heart rate were monitored. Patients stayed in each position for at least 3 minutes prior to taking measurements. We compared IVC measurements using a one-way analysis of variance for repeated measures. Results: Twenty patients were enrolled. The average age was 25 years (SD 5.7) with a mean estimated gestational age of 39.5 weeks (SD 1.4). There were no significant differences seen in IVC filling in each of the positions (see table 1). In addition, there were no differences in hemodynamic parameters between positions. Ten (50%) patients had the largest IVC measurement in the LLT position, 7 (35%) patients in the RLT position, and 3 (15%) in the supine position. Conclusion: There were no significant differences in IVC filling between patient positions. For some third-trimester patients LLT may not be the optimal position for IVC filling. Background: Although the ACGME and RRC require competency assessment in ED bedside ultrasound (US), there are no standardized assessment tools for US training in EM. Objectives: Using published US guidelines, we developed four Observed Structured Competency Evaluations (OSCE) for four common EM US exams: FAST, aortic, cardiac, and pelvic. Inter-rater reliability was calculated for overall performance and for the individual components of each OSCE. Methods: This prospective observational study derived four OSCEs that evaluated overall study competency, image quality for each required view, technical factors (probe placement, orientation, angle, gain, and depth), and identification of key anatomic structures. EM residents with varying levels of training completed an OSCE under direct observation of two EM-trained US experts. Each expert was blinded to the other's assessment. Overall study competency and image quality of each required view were rated on a five-point scale (1-poor, 2-fair, 3-adequate, 4-good, 5-excellent), with explicit definitions for each rating. Each study had technical factors (correct/incorrect) and anatomic structures (identified/not identified) assessed as binary variables. Data were analyzed using Cohen's and weighted kappa, descriptive statistics, and 95% CI. Results: A total of 185 US exams were observed, including 33 FAST, 53 cardiac, 53 aorta, and 46 pelvic. Total assessments included 185 ratings of overall study competency, 691 ratings of required view image quality, 2998 ratings of technical factors, and 2978 ratings of anatomic structures. Inter-rater assessment of overall study competency showed excellent agreement, raw agreement 0.84 (0.77, 0.89), weighted kappa 0.87 (0.82, 0.91). Ratings of required view image quality showed excellent agreement: raw agreement 0.75 (0.72, 0.79), weighted kappa 0.82 (0.79, 0.84).
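A minimal sketch of how agreement statistics like these (exact-match proportion and a weighted kappa for the 1-5 ordinal ratings) can be computed is shown below; the paired ratings are hypothetical, not study data, and scikit-learn is assumed to be available.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical image-quality ratings (1-5) given by two raters to the same ten studies
rater_a = np.array([5, 4, 4, 3, 2, 5, 3, 4, 1, 5])
rater_b = np.array([5, 4, 3, 3, 2, 5, 4, 4, 2, 5])

raw_agreement = np.mean(rater_a == rater_b)                              # exact-match proportion
weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")   # suited to ordinal scales
print(f"raw agreement = {raw_agreement:.2f}, weighted kappa = {weighted_kappa:.2f}")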
Inter-rater assessment of technical factors showed substantial agreement: raw agreement 0.96 (0.95, 0.97), Cohen's kappa 0.78 (0.74, 0.82). Ratings of identification of anatomic structures showed substantial agreement: raw agreement 0.86 (0.85, 0.88), Cohen's kappa 0.64 (0.60, 0.67). Conclusion: Inter-rater reliability is substantial to excellent using the derived ultrasound OSCEs to rate EM resident competency in FAST, aortic, cardiac, and pelvic ultrasound. Validation of this tool is ongoing. Objectives: The objective of this study was to identify which transducer orientation, longitudinal or transverse, is the best method of imaging the axillary vein with ultrasound, as defined by successful placement in the vein with one needle stick, no redirections, and no complications. Methods: Emergency medicine resident and attending physicians at an academic medical center were asked to cannulate the axillary vein in a torso phantom model. The participants were randomized to start with either the longitudinal or transverse approach and completed both sequentially, after viewing a teaching presentation. Participants completed pre- and post-attempt questionnaires. Measurements of each attempt were taken regarding time to completion, success, skin punctures, needle redirections, and complications. We compared proportions using a normal binomial approximation and continuous data using the t-distribution, as appropriate. A sample size of 57 was chosen based on the following assumptions: power, 0.8; significance, 0.05; effect size, 50% versus 75%. Results: Fifty-seven operators with a median experience of 85 prior ultrasounds (IQR 26 to 120) participated. First-attempt success frequency was 39/57 (0.69) for the longitudinal method and 21/57 (0.37) for the transverse method (difference 0.32, 95% CI 0.12-0.51); this difference was similar regardless of operator experience. The longitudinal method had fewer redirections (mean difference 1.8, 95% CI 0.8-2.8) and skin punctures (mean difference 0.3, 95% CI −2 to 0.18). Arterial puncture occurred in 2/57 longitudinal attempts and 7/57 transverse attempts, with no pleural punctures in either group. Among successful attempts, the time spent was 24 seconds less for the longitudinal method (95% CI 3-45). Though 93% of participants had more experience with the transverse method prior to the training session, 58% indicated after the session that they preferred the longitudinal method. Methods: A prospective single-center study was conducted to assess the compressibility of the basilic vein with ultrasound. Healthy study participants were recruited. The compressibility was assessed at baseline, and then further assessed with one proximal tourniquet, two tourniquets (one distal and one proximal), and a proximal blood pressure cuff inflated to 150 mmHg. Compressibility was defined as the vessel's resistance to collapse to external pressure and rated as completely compressible, moderately compressible, or mildly compressible after mild pressure was applied with the ultrasound probe. Results: One hundred patients were recruited into the study. Ninety-eight subjects were found to have a completely compressible basilic vein at baseline. When one tourniquet and two tourniquets were applied, 64 and 58 participants, respectively, continued to have completely compressible veins. A Fisher's Exact test comparing one versus two tourniquets revealed no difference between these two techniques (p = 0.46).
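The one- versus two-tourniquet comparison just described can be reproduced as a Fisher's exact test on the reported counts (64/100 vs 58/100 completely compressible veins); the sketch below is illustrative and assumes scipy is available.

from scipy.stats import fisher_exact

table = [[64, 100 - 64],   # one tourniquet: completely compressible vs not
         [58, 100 - 58]]   # two tourniquets: completely compressible vs not
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2f}")  # expected to be non-significant, as reported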
Only two participants continued to have completely compressible veins following application of the blood pressure cuff. The compressibility of this group was found to be statistically significant by Fisher's Exact test compared to both tourniquet groups (p < 0.0001). Furthermore, 24 participants with the blood pressure cuff applied were found to have moderately compressible veins and 72 participants were found to have mildly compressible veins. Conclusion: Tourniquets and blood pressure cuffs can both decrease the compressibility of peripheral veins. While there was no difference identified between using one and two tourniquets, utilization of a blood pressure cuff was significantly more effective to decrease compressibility. The findings of this study may be utilized in the emergency department when attempting to obtain peripheral venous access, specifically supporting the use of blood pressure cuffs to decrease compressibility. Background: Electroencephalography (EEG) is an underused test that can provide valuable information in the evaluation of emergency department (ED) patients with altered mental status (AMS). In AMS patients with nonconvulsive seizure (NCS), EEG is necessary to make the diagnosis and to initiate proper treatment. Yet, most cases of NCS are diagnosed >24 h after ED presentation. Obstacles to routine use of EEG in the ED include space limitations, absence of 24/7 availability of EEG technologists and interpreters, and the electrically hostile ED environment. A novel miniature portable wireless device (microEEG) is designed to overcome these obstacles. Objectives: To examine the diagnostic utility of micro-EEG in identifying EEG abnormalities in ED patients with AMS. Methods: An ongoing prospective study conducted at two academic urban EDs. Inclusion: Patients ‡13 years old with AMS. Exclusion: An easily correctable cause of AMS (e.g. hypoglycemia, opioid overdose). Three 30-minute EEGs were obtained in random order from each subject beginning within one hour of presentation: 1) a standard EEG, 2) a microEEG obtained simultaneously with conventional cup electrodes using a signal splitter, and 3) a microEEG using an Electrocap. Outcome: operative characteristics of micro-EEG in identifying any EEG abnormality. All EEGs were interpreted in a blinded fashion by two board-certified epileptologists. Within each reader-patient pairing, the accuracy of EEGs 2 and 3 were each assessed relative to EEG 1. Sensitivity, specificity, and likelihood ratios (LR) are reported for microEEG by standard electrodes and Electrocap (EEGs 2 and 3). Inter-rater variability for EEG interpretations is reported with kappa. Results: The interim analysis was performed on 130 consecutive patients (target sample size: 260) enrolled from May to October 2011 (median age: 61, range: 13-100, 40% male). Overall, 82% (95% confidence interval [CI], 76-88%) of interpretations were abnormal (based on EEG1). Kappa values representing the agreement of neurologists in interpretation of EEG 1-3 were 0.54 (0.36-0.73), 0.57 (0.39-0.75), and 0.55 (0.37-0.74), respectively. Conclusion: The diagnostic accuracy and concordance of microEEG are comparable to those of standard EEG but the unique ED-friendly characteristics of the device could help overcome the existing barriers for more frequent use of EEG in the ED. 
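A minimal sketch of how the operative characteristics reported for microEEG (sensitivity, specificity, and likelihood ratios against the standard EEG as reference) can be tallied from a 2x2 table is shown below; the counts are placeholders, not study data.

def operative_characteristics(tp, fp, fn, tn):
    # tp/fn: abnormal by reference EEG, detected/missed by the index test
    # tn/fp: normal by reference EEG, correctly called normal / falsely called abnormal
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec if spec > 0 else float("inf")
    return sens, spec, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = operative_characteristics(tp=80, fp=5, fn=8, tn=17)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")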
(Originally submitted as a ''late-breaker.'') Background: Patients who use an ED for acute migraine are characterized by higher migraine disability scores and lower socio-economic status, and are unlikely to have used a migraine-specific medication prior to ED presentation. Objectives: To determine if a comprehensive migraine intervention, delivered just prior to ED discharge, could improve migraine impact scores one month after the ED visit. Methods: This was a randomized controlled trial of a comprehensive migraine intervention versus typical care among patients who presented to an ED for management of acute migraine. At the time of discharge, for patients randomized to comprehensive care, we reinforced their diagnosis, shared a migraine education presentation from the National Library of Medicine, provided them with six tablets of sumatriptan 100 mg and 14 tablets of naproxen 500 mg, and if they wished, provided them with an expedited free appointment to our institution's Headache Clinic. Patients randomized to typical care received the care their attending emergency physician felt was appropriate. The primary outcome was a between-group comparison of the HIT6 score, a validated headache assessment instrument, one month after ED discharge. Secondary outcomes included an assessment of satisfaction with headache care and frequency of use of migraine-specific medication within that one-month period. The outcome assessor was blinded to assignment. Results: Over a 19-month period, 50 migraine patients were enrolled. One-month follow-up was successfully obtained in 92% of patients. Baseline characteristics were comparable. One-month HIT6 scores in the two groups were nearly identical (59 vs 56, 95%CI for difference of 3: −5, 11), as was dissatisfaction with overall headache care (17% versus 18%, 95%CI for difference of 1%: −22%, 24%). Not surprisingly, patients randomized to the comprehensive intervention were more likely to be using triptans or migraine-preventive therapy (43% versus 0%, 95%CI for difference of 43%: 20, 63%) one month later. Conclusion: A comprehensive migraine intervention, when compared to typical care, did not improve HIT6 scores one month after ED discharge. Future work is needed to define a migraine intervention that is practical and useful in an ED. Background: Lumbar puncture (LP) is the standard of care for excluding non-traumatic subarachnoid hemorrhage (SAH), and is usually performed following head CT (HCT). However, in the setting of a non-diagnostic HCT, LP demonstrates a low overall diagnostic yield for SAH (<1% positive rate). Objectives: To describe a series of ED patients diagnosed with SAH by LP following a non-diagnostic HCT, and, when compared to a set of matched controls, determine if clinical variables can reliably identify these ''CT-negative/LP-positive'' patients. Methods: Retrospective case-control chart review of ED patients in an integrated health system between the years 2000-2011 (estimated 5-6 million visits among 18 EDs). Patients with a final diagnosis of non-traumatic SAH were screened for case inclusion, defined as an initial HCT without SAH by final radiologist interpretation and an LP with >5 red blood cells/mm3, along with either 1) xanthochromic cerebrospinal fluid, 2) angiographic evidence of cerebral aneurysm or arteriovenous malformation, or 3) head imaging showing SAH within 48 hours following LP. Control patients were randomly selected among ED patients diagnosed with headache following a negative SAH evaluation with HCT and LP.
Controls were matched to cases by year and presenting ED in a 3:1 ratio. Stepwise logistic regression and Classification and Regression Tree Analysis (CART) were employed to identify predictive variables. Inter-rater reliability (kappa) was determined by independent chart review. Results: Fifty-five cases were identified. All cases were Hunt-Hess grade 1 or 2. Demographics are shown in Table 1. Thirty-four cases (62%) had angiographic evidence of SAH. Five variables were identified that positively predicted SAH following a normal HCT with 98% sensitivity (95% CI, 90-100%) and 25% specificity (95% CI, 19-32%): age > 50 years, neck pain or stiffness, onset of headache with exertion, vomiting with headache, or loss of consciousness at headache onset. Kappa values for selected variables ranged from 0.75 to 1.0 (18% sample). The c-statistic (AUC) and Hosmer-Lemeshow test p-value for the logistic regression model are 0.87 and 0.74, respectively (Table 2). Conclusion: Several clinical variables can help safely limit the amount of invasive testing for SAH following a non-diagnostic HCT. Prospective validation of this model is needed prior to practice implementation. Background: Post-thrombolysis intracerebral hemorrhage (ICH) is associated with poor outcomes. Previous investigations have attempted to determine the relationship between pre-existing anti-platelet (AP) use and the safety of intravenous thrombolysis, but have been limited by low event rates, thus decreasing the precision of estimates. Objectives: Our objective was to determine whether pre-existing AP therapy increases the risk of ICH following thrombolysis. Methods: Consecutive cases of ED-treated thrombolysis patients were identified using multiple methods, including active and passive surveillance. Retrospective data were collected from four hospitals from 1996-2005, and 24 distinct hospitals from 2007-2010 as part of a cluster randomized trial. The same chart abstraction tool was used during both time periods and data were subjected to numerous quality control checks. Hemorrhages were classified using a pre-specified methodology: ICH was defined as presence of hemorrhage in radiographic interpretations of follow-up imaging (primary outcome). Symptomatic ICH (secondary outcome) was defined as radiographic ICH with associated clinical worsening. A multivariable logistic regression model was constructed to adjust for clinical factors previously identified to be related to post-thrombolysis ICH. As there were fewer SICH events, the multivariable model was constructed similarly, except that variables divided into quartiles in the primary analysis were dichotomized at the median. Results: There were 830 patients included, with 47% having documented pre-existing AP treatment. The mean age was 69 years, the cohort was 53% male, and the median NIHSS was 12. The unadjusted proportion of patients with any ICH was 15.1% without AP and 19.3% with AP (difference 4.2%, 95% CI −1.2% to 9.6%); for SICH this was 6.1% without AP and 9% with AP (difference 3.1%, 95%CI −1% to 6.7%). No significant association between pre-existing AP treatment and radiographic or symptomatic ICH was observed (table). Conclusion: We did not find that AP treatment was associated with post-thrombolysis ICH or SICH in this cohort of community-treated patients. Pre-existing tobacco use, younger age, and lower severity were associated with lower odds of SICH.
An association between AP therapy and SICH may still exist -further research with larger sample sizes is warranted in order to detect smaller effect sizes. Background: Post-cardiac arrest therapeutic hypothermia (TH) improves survival and neurologic outcome after cardiac arrest, but the parameters required for optimal neuroprotection remain uncertain. Our laboratory recently reported that 48-hour TH was superior to 24-hour TH in protecting hippocampal CA1 pyramidal neurons after asphyxial cardiac arrest in rats. Cerebellar Purkinje cells are also highly sensitive to ischemic injury caused by cardiac arrest, but the effect of TH on this neuron population has not been previously studied. Objectives: We examined the effect of post-cardiac arrest TH onset time and duration on Purkinje neuron survival in cerebella collected during our previous study. Methods: Adult male Long Evans rats were subjected to 10-minute asphyxial cardiac arrest followed by CPR. Rats that achieved return of spontaneous circulation (ROSC) were block randomized to normothermia (37.0 deg C) or TH (33.0 deg C) initiated 0, 1, 4, or 8 hours after ROSC and maintained for 24 or 48 hours (n = 21 per group). Sham injured rats underwent anesthesia and instrumentation only. Seven days post-cardiac arrest or sham injury, rats were euthanized and brain tissue was processed for histology. Surviving Purkinje cells with normal morphology were quantified in the primary fissure in Nissl stained sagittal sections of the cerebellar vermis. Purkinje cell density was calculated for each rat, and group means were compared by ANOVA with Bonferroni analysis. Results: Purkinje cell density averaged (+/) SD) 35.9 (2.4) cells/mm in sham-injured rats. Neuronal survival in normothermic post-cardiac arrest rats was significantly reduced compared to sham (10.7% (5.0%)). Overall, TH resulted in significant neuroprotection compared to normothermia (38.9% (15.7%) of sham). Purkinje cell density with 24-hour duration TH was 35.0% (11.2%) of sham and 48-hour duration TH was 43.3% (15.6%), both significantly improved from sham (p = 0.245 between durations). TH initiated 0, 1, 4, and 8 hours post-ROSC provided similar benefit: 44.6% (21.6%), 33.2% (8.1%), 36.6% (12.9%), and 41.1% (9.3%) of sham, respectively. Conclusion: Overall, these results indicate that postcardiac arrest TH protects cerebellar Purkinje cells with a broad therapeutic window. Our results underscore the importance of considering multiple brain regions when optimizing the neuroprotective effect of post-cardiac arrest TH. The Effect of Compressor-Administered Defibrillation on Peri-Shock Pauses in a Simulated Cardiac Arrest Scenario Joshua Glick, Evan Leibner, Thomas Terndrup Penn State Hershey Medical Center, Hershey, PA Background: Longer pauses in chest compressions during cardiac arrest are associated with a decreased probability of successful defibrillation and patient survival. Having multiple personnel share the tasks of performing chest compressions and shock delivery can lead to communication complications that may prolong time spent off the chest. Objectives: The purpose of this study was to determine whether compressor-administered defibrillation led to a decrease in pre-shock and peri-shock pauses as compared to bystander-administered defibrillation in a simulated in-hospital cardiac arrest scenario. We hypothesized that combining the responsibilities of shock delivery and chest-compression performance may lower no-flow periods. 
Methods: This was a randomized, controlled study measuring pauses in chest compressions for defibrillation in a simulated cardiac arrest. Medical students and ED personnel with current CPR certification were surveyed for participation between July 2011 and October 2011. Participants were randomized to either a control (facilitator-administered shock) or variable (participant-administered shock) group. All participants completed one minute of chest compressions on a mannequin in a shockable rhythm prior to initiation of prompt and safe defibrillation. Pauses for defibrillation were measured and compared in both study groups. Results: Out of 200 total enrollments, the data from 197 defibrillations were analyzed. Subject-initiated defibrillation resulted in a significantly lower pre-shock hands-off time (0.57 s; 95% CI: 0.47-0.67) compared to facilitator-initiated defibrillation (1.49 s; 95% CI: 1.35-1.64). Furthermore, subject-initiated defibrillation resulted in a significantly lower peri-shock hands-off time (2.77 s; 95% CI: 2.58-2.95) compared to facilitator-initiated defibrillation (4.25 s; 95% CI: 4.08-4.43). Conclusion: Assigning the responsibility for shock delivery to the provider performing compressions encourages continuous compressions throughout the charging period and decreases total time spent off the chest. This modification may also decrease the risk of accidental shock and improve patient survival. However, as this was a simulation-based study, clinical implementation is necessary to further evaluate these potential benefits. Objectives: To determine the sensitivity and specificity of peripheral venous oxygen (pO2) to predict abnormal central venous oxygen saturation in septic shock patients in the ED. Methods: Secondary analysis of an ED-based randomized controlled trial of early sepsis resuscitation targeting three physiological variables: CVP, MAP, and either ScvO2 or lactate clearance. Inclusion criteria: suspected infection, two or more SIRS criteria, and either systolic blood pressure <90 mmHg after a fluid bolus or lactate >4 mM. Peripheral venous pO2 was measured prior to enrollment as part of routine care, and ScvO2 was measured as part of the protocol. We analyzed for agreement between venous pO2 and ScvO2 using Spearman's rank correlation. Sensitivity and specificity to predict an abnormal ScvO2 (<70%) were calculated for each incremental value of pO2. Results: A total of 175 patients were analyzed. Median pO2 was 43 mmHg (IQR 32, 55). Median initial ScvO2 was 79% (IQR 70, 88). Thirty-nine patients (23%) had an initial ScvO2 < 70%. Spearman's rank correlation demonstrated fair correlation between initial pO2 and ScvO2 (ρ = 0.26). A cutoff of venous pO2 < 57 was 90% sensitive and 20% specific for detecting an initial ScvO2 < 70%. Twenty-seven patients (20%) demonstrated an initial pO2 of >56. Conclusion: In ED septic shock patients, venous pO2 demonstrated only fair correlation with ScvO2, though a cutoff value of 56 was sensitive for predicting an abnormal ScvO2. Twenty percent of patients demonstrated an initial value above the cutoff, potentially representing a group in whom ScvO2 measurement could be avoided. Future studies aiming to decrease central line utilization could consider the use of peripheral O2 measurements in these patients. sessions. Ninety-two percent were RNs, median clinical experience was 11-15 years, and 56% were from an intensive care unit.
Provider confidence increased significantly with a single session despite the highly experienced sample (Figure 1 ). There was a trend for further increased confidence with an additional session and the increased confidence was maintained for at least 3-6 months given the normal sensitivity analysis. Conclusion: High fidelity simulation significantly increases provider confidence even among experienced providers. This study was limited by its small sample size and recent changes in ACLS guidelines. Background: Recent data suggest alarming delays and deviations in major components of pediatric resuscitation during simulated scenarios by pediatric housestaff. Objectives: To identify the most common errors of pediatric residents during multiple simulated pediatric resuscitation scenarios. Methods: A retrospective observational study conducted in an academic tertiary care hospital. Pediatric residents (PGY1 and PGY3) were videotaped performing a series of five pediatric resuscitation scenarios using a high-fidelity simulator (Simbaby, Laerdal): pulseless non-shockable arrest, pulseless shockable arrest, dysrhythmia, respiratory arrest, and shock. The primary outcome was the presence of significant errors prospectively defined using a validated scoring instrument designed to assess sequence, timing, and quality of specific actions during resuscitations based on the 2005 AHA PALS guidelines. Residents' clinical performances were measured by a single video reviewer. The primary analysis was the proportion of errors for each critical task for each scenario. We estimated that the evaluation of each resident would provide a confidence interval less than 0.20 for the proportion of errors. Results: Twenty-four of 25 residents completed the study. Across all scenarios, pulse check was delayed by more than 30 seconds in 56% (95%CI: 46%-66%). For non-shockable arrest, CPR was started more than 30 seconds after recognizing arrest in 21% (95%CI 7-42%) and inappropriate defibrillation was performed in 29% (95%CI 13-51%). For shockable arrest, participants failed to identify the rhythm in 58% (95%CI 37-78%), CPR was not performed in 25% (95%CI 10-47%), while defibrillation was delayed by more than 90 seconds in 33% (95%CI 16-51%) and not performed in one case. For shock, participants failed to ask for a dextrose check in 71% (95%CI 51-86%), and it was delayed by more than 60 seconds for all others. Conclusion: The most common error across all scenarios was delay in pulse check. Delays in starting CPR and inappropriate defibrillation were common errors in non-shockable arrests, while failure to identify rhythm, CPR omission, and delaying defibrillation were noted for shockable arrests. For shock, omission of rapid dextrose check was the most common error, while delaying the test when ordered was also significant. Future training in pediatric resuscitation should target these errors. Background: Many scoring instruments have been described to measure clinical performance during resuscitation; however, the validity of these tools has yet to be proven in pediatric resuscitation. Objectives: To determine the external validity of published scoring instruments to evaluate clinical performance during simulated pediatric resuscitations using PALS algorithms and to determine if inter-rater reliability could be assessed. Methods: This was a prospective quasi-experimental design performed in a simulation lab of a pediatric tertiary care facility. 
Participants were residents from a single pediatric program distinct from where the instrument was originally developed. A total of 13 PGY1s and 11 PGY3s were videotaped during five simulated pediatric resuscitation scenarios. Pediatric emergency physicians rated resident performances before and after a PALS course using standardized scoring. Each video recording was viewed and scored by two raters blinded to one another. A priori, it was determined that, for the scoring instrument to be valid, participants should improve their scores after participating in the PALS course. Differences in means between pre-PALS and post-PALS and between PGY1 and PGY3 were compared using ANOVA. To investigate differences in the scores of the two groups over the five scenarios, a two-factor ANOVA was used. Reliability was assessed by calculating an intraclass correlation coefficient for each scenario. Results: Following the PALS course, scores improved by 8.6% (3.8 to 13.3), 15.7% (8.6 to 22.7), 6.3% (−1.8 to 14.3), 18.2% (9.3 to 27), and 4.1% (−3.0 to 11.2) for the pulseless non-shockable arrest, pulseless shockable arrest, dysrhythmia, respiratory, and shock scenarios, respectively. There were no differences in scores between PGY1s and PGY3s before and after the PALS course. There was excellent reliability for each scoring instrument, with ICCs varying between 0.85 and 0.98. Conclusion: The scoring instrument was able to demonstrate significant improvements in scores following a PALS course for PGY1 and PGY3 pediatric residents for the pulseless non-shockable arrest, pulseless shockable arrest, and respiratory arrest scenarios only. However, it was unable to discriminate between PGY1s and PGY3s both before and after the PALS course for any scenarios. The scoring instrument showed excellent inter-rater reliability for all scenarios. Background: Medical simulation is a common and frequently studied component of emergency medicine (EM) residency curricula. Its utility in the context of EM medical student clerkships is not well defined. Objectives: The objective was to measure the effect of simulation instruction on medical students' EM clerkship oral exam performance. We hypothesized that students randomized to the simulation group would score higher. We predicted that simulation instruction would promote better clinical reasoning skills and knowledge expression. Methods: This was a randomized observational study conducted from 7/2009 to 5/2010. Participants were fourth-year medical students in their EM clerkship. Students were randomly assigned on their first day to one of two groups. The study group received simulation instruction in place of one of the lectures, while the control group was assigned to the standard curriculum. The standard clerkship curriculum includes lectures, case studies, procedure labs, and clinical shifts without simulation. At the end of the clerkship, all students participated in written and oral exams. Graders were not blinded to group allocation. Grades were assigned based on a pre-defined set of criteria. The final course composite score was computed based on clinical evaluations and the results of both written and oral exams. Oral exam scores between the groups were compared using a two-sample t-test. We used the Spearman rank correlation to measure the association between group assignment and the overall course grade. The study was approved by our institutional IRB. Results: Sixty-one students participated in the study and were randomly assigned to one of two groups.
Twenty-nine (47.5%) were assigned to simulation and the remaining 32 (52.5%) students were assigned to the standard curriculum. Students assigned to the simulation group scored 5.34% (95% CI 2.78-7.91%) higher on the oral exam than the non-simulation group. Additionally, simulation was associated with a higher final course grade (p < 0.05). Limitations of this pilot study include lack of blinding and interexaminer variability. Conclusion: Simulation training as part of an EM clerkship is associated with higher oral exam scores and higher overall course grade compared to the standard curriculum. The results from this pilot study are encouraging and support a larger, more rigorous study. Initial approaches to common complaints are taught using a standard curriculum of lecture and small group case-based discussion. We added a simulation exercise to the traditional altered mental status (AMS) curriculum with the hypothesis that this would positively affect student knowledge, attitudes, and level of clinical confidence caring for patients with AMS. Methods: AMS simulation sessions were conducted in June 2010 and 2011; student participation was voluntary. The simulation exercises included two AMS cases using a full-body simulator and a faculty debriefing after each case. Both students who did and did not participate in the simulations completed a written post-test and a survey related to confidence in their approach to AMS. Results: 154 students completed the post-test and survey. 65 (42%) attended the simulation session. 48 (31%) attended all three sessions. 58 (38%) participated in the lecture and small group. 15 (10%) did not attend any session. Post-test scores were higher in students who attended the simulations versus those who did not: 7 (IQR, 6-8) vs. 6 (IQR, 4-7); P < 0.001. Students who attended the simulations felt more confident about assessing an AMS patient (58% vs. 42%; P = 0.05), articulating a differential diagnosis (66% vs. 47%; P = 0.03), and knowing initial diagnostic tests (74% vs. 53%; P = 0.01) and initial interventions (79% vs. 56%; P = 0.003) for an AMS patient. Students who attended the simulations were more likely to rate the overall AMS curriculum as useful (94% vs. 61%; P < 0.001). Conclusion: Addition of a simulation session to a standard AMS curriculum had a positive effect on student performance on a knowledge-based exam and increased confidence in clinical approach. The study's major limitations were that student participation in the simulation exercise was voluntary and that effect on applied skills was not measured. Future research will determine whether simulation is effective for other chief complaints and if it improves actual clinical performance. Background: The ACGME has defined six core competencies for residents including ''Professionalism'' and ''Interpersonal and Communication Skills.'' Integral to these two competencies is empathy. Prior studies suggest that self-reported empathy declines during medical training; no reported study has yet integrated simulation into the evaluation of empathy in medical training. Objectives: To determine if there is a relation between level of training and empathy in patient interactions as rated during simulation. Methods: This is a prospective observational study at a tertiary care center comparing participants at four different levels of training: first (MS1) and third year (MS3) medical students, incoming EM interns (PGY1), and EM senior residents (PGY3/4). 
Trainees participated in two simulation scenarios (ectopic pregnancy and status asthmaticus) in which they were responsible for clinical management (CM) and patient interactions (PI). This was the first simulation exposure during an established simulation curriculum for MS1, MS3, and PGY1. Two independent raters reviewed videotaped simulation scenarios using checklists of critical actions for clinical management (CM: 0-11 points) and patient interactions (PI: 0-17 points). Inter-rater reliability was assessed by intra-class correlation coefficients (ICCs). Objectives: We explored attitudes and beliefs about the handoff, using qualitative methods, from a diverse group of stakeholders within the EMS community. We also characterized perceptions of barriers to high-quality handoffs and identified strategies for optimizing this process. Methods: We conducted seven focus groups at three separate gatherings of EMS professionals (one local, two national) in 2010/2011. Snowball sampling was used to recruit 48 participants with diverse professional, experiential, geographic, and demographic characteristics. Focus groups, lasting 60-90 minutes, were moderated by investigators trained in qualitative methods, using an interview guide to elicit conversation. Recordings of each group were transcribed. Three reviewers analyzed the text in a multi-stage iterative process to code the data, describe the main categories, and identify unifying themes. Results: Participants included EMTs, paramedics, physicians, and nurses. Clinical experience ranged from 4 months to 36 years. Recurrent thematic domains when discussing attitudes and beliefs were: perceptions of respect and competence, professionalism, teamwork, value assigned to the process, and professional duty. Modifiers of these domains were: hierarchy, skill/training level, severity/type of patient illness, and system/regulatory factors. Strategies for overcoming barriers to the handoff included: fostering familiarity and personal connections between EMS and ED staff; encouraging two-way conversations, feedback, and direct interactions between EMS providers and ED physicians; and optimizing ways for EMS providers to share subjective impressions (beyond standardized data elements) with hospital-based care teams. Conclusion: EMS professionals assign high value to the ED handoff. Variations in patient acuity, familiarity with other handoff participants, and perceptions of respect and professionalism appear to influence the perceived quality of this transition. Regulatory strategies to standardize the contents of the handoff may not alone overcome barriers to this process. A multidisciplinary group (with expertise in areas such as epidemiology and public health) then developed an approach to assign EMS records to one of 20 symptom-based illness categories (gastrointestinal illness, respiratory, etc.). EMS encounter records were characterized into these illness categories using a novel text analytic program. Event alerts were identified across the state and local regions in illness categories using change detection from baseline (CUSUM) analysis (three standard deviations) and a novel text-proportion (TAP) analysis approach (SAS Institute, Cary, NC). Results: 2.4 million EMS encounter records over a 2-year period were analyzed. The initial analysis focused upon gastrointestinal illness (GI) given the potential relationship of GI distress to infectious outbreaks, food contamination, and intentional poisonings (ricin).
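As a rough illustration of the change-detection step just described, the following minimal Python sketch runs a simple one-sided CUSUM over a daily count series and flags days where the cumulative standardized excess crosses a three-standard-deviation threshold; it is only an approximation of the idea, not the SAS/TAP implementation used by the authors, and the counts are invented. The events actually detected are reported in the next paragraph.

import numpy as np

def cusum_alerts(counts, baseline_days=28, threshold_sd=3.0):
    """Flag days where a one-sided CUSUM of standardized excess counts
    exceeds threshold_sd; baseline mean/SD come from a trailing window."""
    counts = np.asarray(counts, dtype=float)
    alerts, s = [], 0.0
    for day in range(baseline_days, len(counts)):
        base = counts[day - baseline_days:day]
        mu, sd = base.mean(), base.std(ddof=1) or 1.0
        z = (counts[day] - mu) / sd          # standardized excess for the day
        s = max(0.0, s + z - 0.5)            # 0.5 = reference value k
        if s > threshold_sd:
            alerts.append(day)
            s = 0.0                          # reset after signalling
    return alerts

# Invented daily GI-category EMS encounter counts with a late surge
rng = np.random.default_rng(0)
series = np.concatenate([rng.poisson(40, 60), rng.poisson(55, 10)])
print("alert days:", cusum_alerts(series))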
After accounting for seasonality, a significant GI event was detected in Feb 2010 (see red circle on graph). This event coincided with a confirmed norovirus outbreak. The CUSUM approach (yellow circle on graph) detected the alert event on Jan 24, 2010. The novel TAP approach on a regional basis detected the alert on Dec 6, 2009. Conclusion: EMS has the advantage of being an early point of contact with patients and providing information on the location of insult or injury. Surveillance based on EMS information system data can detect emergent outbreaks of illness of interest to public health. A novel text proportion analytic technique shows promise as an early event detection method. Assessing Chronic Stress in the Emergency Medical Services. Elizabeth A. Donnelly1, Jill Chonody2; 1University of Windsor, Windsor, ON, Canada; 2University of South Australia, Adelaide, Australia. Background: Attention has been paid to the effect of critical incident stress in the emergency medical services (EMS); however, less attention has been given to the effect of chronic stress (e.g., conflict with administration or colleagues, risk of injury, fatigue, interference in non-work activities) in EMS. A number of extant instruments assess for workplace stress; however, none address the idiosyncratic aspects of work in EMS. Objectives: The purpose of this study was to validate an instrument, adapted from McCreary and Thompson (2006), that assesses levels of both organizational and operational work-related chronic stress in EMS personnel. Methods: To validate this instrument, a cross-sectional, observational web-based survey was used. The instrument was distributed to a systematic probability sample of EMTs and paramedics (n = 12,000). The survey also included the Perceived Stress Scale (Cohen, 1983) to assess for convergent construct validity. Results: The survey attained a 13.6% usable response rate (n = 1633); respondent characteristics were consistent across demographic characteristics with other studies of EMTs and paramedics. The sample was split in order to allow for exploratory and confirmatory factor analyses (n = 847/n = 786). In the exploratory factor analysis, principal axis factoring with an oblique rotation revealed a two-factor, 34-item solution (KMO = 0.943, χ2 = 23344.38, df = 561, p ≤ .001). Confirmatory factor analysis suggested a more parsimonious, two-factor, 20-item solution (χ2 = 632.67, df = 168, p ≤ 0.001, RMSEA = 0.06, CFI = 0.92, TLI = 0.91, SRMR = 0.04). The factors demonstrated good internal reliability (operational stress α = 0.877, organizational stress α = 0.868). Both factors were significantly correlated (p ≤ 0.01) with the hypothesized convergent validity measure. Conclusion: Theory and empirical research indicate that exposure to chronic workplace stress may play an important part in the development of psychological distress, including burnout, depression, and posttraumatic stress disorder (PTSD). Workplace stress and stress reactions may potentially interfere with job performance. As no extant measure assesses for chronic workplace stress in EMS, the validation of this chronic stress measure enhances the tools EMS leaders and researchers have in assessing the health and well-being of EMS providers. Effect of Naltrexone. Background: Survivors of sarin and other organophosphate poisoning can develop delayed encephalopathy that is not prevented by standard antidotal therapy with atropine and pralidoxime.
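The internal reliability figures quoted above for the chronic stress instrument (α = 0.877 and 0.868) are Cronbach's alpha values for the two factors. A minimal sketch of that calculation on an invented item-response matrix follows; it is meant only to make the formula concrete, not to reproduce the authors' analysis.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented Likert responses: 8 respondents x 5 items from one stress factor
responses = np.array([
    [4, 4, 3, 5, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 2, 2],
    [4, 3, 4, 4, 4],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")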
A rat model of poisoning with the sarin analogue diisopropylfluorophosphate (DFP) demonstrated impairment of spatial memory despite antidotal therapy with atropine and pralidoxime. Additional antidotes are needed after acute poisonings that will prevent the development of encephalopathy. Objectives: To determine the efficacy of naltrexone in preventing delayed encephalopathy after poisoning with the sarin analogue DFP in a rat model. The hypothesis is that naltrexone would improve performance on spatial memory after acute DFP poisoning. The sarin analogue DFP was used because it has similar toxicity to sarin while being less dangerous to handle. Methods: A randomized controlled experiment of the effects of naltrexone on spatial memory after DFP poisoning was conducted at a university animal research laboratory. Long Evans rats weighing 250-275 grams were randomized to a DFP group (n = 4, rats received a single intraperitoneal (IP) injection of DFP 5 mg/kg) or a DFP+naltrexone group (n = 5, rats received a single IP injection of DFP (5 mg/kg) followed by naltrexone 5 mg/kg/day). After injection, rats were monitored for signs and symptoms of cholinesterase toxicity. If toxicity developed, antidotal therapy was initiated with atropine. Background: One of the primary goals of management of patients presenting with known or suspected acetaminophen (APAP) ingestion is to identify the risk for APAP-induced hepatotoxicity. Current practice is to measure the APAP level at a minimum of 4 hours post-ingestion and plot this value on the Rumack-Matthew nomogram. One retrospective study of APAP levels drawn less than 4 hours post-ingestion found a level less than 100 mcg/ml to be sufficient to exclude toxic ingestion. Objectives: The aim of this study was to prospectively determine the negative predictive value (NPV) for toxicity of an APAP level of less than 100 mcg/ml obtained less than 4 hours post-ingestion. Methods: This was a multicenter prospective cohort study of patients presenting to one of five tertiary care hospitals that are part of the Toxicology Investigators Consortium (ToxIC). Eligible patients presented to the emergency department less than 4 hours after known or suspected ingestion and had the initial APAP level obtained at greater than 1 but less than 4 hours post-ingestion. A second APAP level was obtained at 4 hours or more post-ingestion and plotted on the Rumack-Matthew nomogram to determine risk of toxicity. The outcome of interest was the NPV of an initial APAP level less than 100 mcg/ml. A power analysis based on an alpha = 0.05 and power of 0.80 yielded the requirement of 71 subjects. Results: Data were collected on 171 patients over a 30-month period from May 2009 to Nov 2011. Patients excluded from NPV analysis consisted of: initial APAP level greater than 100 mcg/ml (31), negligible APAP levels on both the initial and confirmatory measurements (31), initial APAP level drawn less than one hour after ingestion (15), or an unknown time of ingestion (1). Ninety-three patients met the eligibility criteria.
Two patients (2.2%) with an initial APAP level less than 100 mcg/ml (54 mcg/ml at 90 min, 38 mcg/ml at 84 min) were determined to be at risk for toxicity based on levels drawn at 4 hours or more post-ingestion. Patients were given an initial dose of 10 mg droperidol intramuscularly followed by an additional dose of 10 mg after 15 min if required. Inclusion criteria were patients requiring physical restraint and parenteral sedation. The primary outcome was the time to sedation. Secondary outcomes were the proportion of patients requiring additional sedation within the first hour, over-sedation measured as -3 on the sedation assessment tool, and respiratory compromise measured as oxygen saturation <90%. Results: Droperidol was administered to 424 patients and 370 of these had sedation scores documented. Presentations included 56% with alcohol intoxication. Dose ranged from 2.5 mg to 30 mg, median 10 mg (interquartile range …). Conclusion: Droperidol is effective for rapid sedation for ABD and rarely causes over-sedation. Serum creatinine (SCr) is widely used to predict risk; however, GFR is a better assessment of kidney function. Objectives: To compare the ability of GFR and SCr to predict the development of CIN among ED patients receiving CECTs. We hypothesized that GFR would be the best available predictor of CIN. Methods: This was a retrospective chart review of ED patients ≥18 years old who had a chest or abdomen/pelvis CECT between 06/01/11 and 07/31/11. Baseline and follow-up SCr levels were recorded. Patients with initial SCr >1.6 mg/dL were excluded, as per hospital Radiology Department protocol. CIN was defined as a SCr increase of either 25% or 0.5 mg/dL, or a GFR decrease of 25%, within 72 hours of contrast exposure. GFR was calculated using the CKD-EPI and MDRD formulae, and analyzed in original units and categorized form (<60, ≥60). With each additional unit decrease in CKD-EPI, subjects were 3% more likely to develop CIN (OR = 1.03) (p < 0.0281). Additionally, subjects with CKD-EPI <60 were 3.20 (OR) times more likely to have CIN than subjects with CKD-EPI ≥60. In original units, CKD-EPI (p < 0.0001) and MDRD (p < 0.0016) both had a significantly higher AUC than SCr. Conclusion: Age, as an independent variable, is the best predictor of CIN, when compared with SCr and GFR. Due to a small number of cases with CIN, the confidence intervals associated with the odds ratios are wide. Future research should focus on patient risk stratification and establishing ED interventions to prevent CIN. A DASS score of >14 has been previously defined as an indicator of increased stress levels. Multivariable logistic regression was utilized to identify demographic and work-life characteristics significantly associated with stress. Results: 53.6% of individuals responded to the survey (34,340/64,032) and prevalence of stress was estimated at 5.9%.
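The stress analysis just described reports adjusted odds ratios with 95% confidence intervals (given in the next paragraph). As a reminder of where such figures come from, the sketch below computes an unadjusted odds ratio with a Wald confidence interval from a 2x2 table; the counts are invented, and the published estimates came from a multivariable model, so this is only illustrative.

import math

def odds_ratio_ci(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log_or = math.sqrt(1/exposed_cases + 1/exposed_noncases +
                          1/unexposed_cases + 1/unexposed_noncases)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Invented counts: stressed/not stressed among paramedics vs. EMT-Basics
or_, lo, hi = odds_ratio_ci(420, 5600, 380, 6700)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")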
The following work-life characteristics were associated with stress: certification level, work experience, and service type. The odds of stress in paramedics were 32% higher when compared to EMT-Basics (OR = 1.32, 95% CI = 1.23-1.42). When compared to those with ≤2 years of experience, providers with more experience were more likely to be stressed (95% CI …28-2.18). EMS professionals working in county services (OR = 1.…, 95% CI = 1.07-1.51) and private services (OR = 1.…56) were more likely than those working in fire-based services to be stressed. The following demographic characteristics were associated with stress: general health and smoking status. Finally, former smokers (OR = 1.34, 95% CI = 1.17-1.54) and current smokers (OR = 1.37, 95% CI = 1.18-1.59) were more likely to be stressed than non-smokers. Literature suggests this is within the range of stress among nurses, and lower than that among physicians. While the current study was able to identify demographic and work-life characteristics associated with stress, the long-term effects are largely unknown. Methods: DESIGN: Prospective randomized controlled trial. SUBJECTS: Female Sus scrofa swine weighing 45-55 kg were infused with amitriptyline 0.5 mg/kg/minute until the MAP fell to 60% of baseline values. Subjects were then randomized to an experimental group (IFE 7 mL/kg followed by an infusion of 0.25 mL/kg/minute) or a control group (SB 2 mEq/kg plus an equal volume of normal saline). INTERVENTIONS: We measured continuous heart rate (HR), sBP, MAP, cardiac output (CO), systemic vascular resistance (SVR), and venous oxygen saturation (SvO2). Laboratory values monitored included pH, pCO2, bicarbonate, lactate, and amitriptyline levels. Descriptive statistics including means, standard deviations, standard errors of measurement, and confidence limits were calculated. Results: Of 14 swine, seven each were allocated to the IFE and SB groups. There was no difference at baseline between the groups regarding HR, sBP, MAP, CO, SVR, or SvO2. The IFE and SB groups required similar mean amounts of TCA to reach hypotension. One IFE and two SB pigs survived. Conclusion: In this interim data analysis of amitriptyline-induced hypotensive swine, we found no difference in mitigating hypotension between IFE and SB. Background: HIV screening in EDs is advocated to achieve the goal of comprehensive population screening. Yet, HIV testing in the ED is sometimes thwarted by a patient's condition (e.g., intoxication) or environmental factors (e.g., other care activities). Whether it is possible to test these patients at a later time is unknown. Objectives: We aimed to determine if ED patients who were initially unable to receive an HIV testing offer might be tested in the ED at a later time. We hypothesized that factors preventing testing are transient and that there are subsequent opportunities to repeat testing offers. Methods: We reviewed medical records for patients presenting to an urban, academic ED who were approached consecutively to offer HIV testing during randomly selected periods from January 2008 to January 2009. Patients for whom the initial attempted offer could not be completed were reviewed in detail with standardized abstraction forms, duplicate abstraction, and third-party discrepancy adjudication.
Primary outcomes included repeat HIV testing offers during that ED visit, and whether a testing offer might eventually have been possible either during the initial visit or at a later visit within 6 months. Outcomes are described as proportions with confidence intervals. Results: Of 824 patients approached, initial testing offers could not be completed for 120 (15%). These 120 were 62% male, 52% white, and had a median age of 41 (range 18-64). A repeat offer of testing during the initial visit would have been possible for 99/120 (83%), and 52/99 (53%) were actually offered testing on repeat approach. Of the 21 for whom a testing offer would not have been possible on the initial visit, 14 (67%) had at least one additional visit within 6 months, and 11/14 (79%) could have been offered testing on at least one visit. Overall, a repeat testing offer would have been possible for 110/120 (93%, 95% CI 85-96%). Conclusion: Factors preventing an initial offer of HIV testing in the ED are generally transient. Opportunities for repeat approach during initial or later ED encounters suggest that, given sufficient resources, the ED could succeed in comprehensively screening the population presenting for care. ED screening personnel who are initially unable to offer testing should repeat their attempt. Background: It has been advocated that EDs adopt an ''opt-out'' rapid HIV screening model in order to identify HIV-infected patients. Previous studies nationwide have shown acceptance rates for HIV screening of 20-90% in emergency departments. However, it is unknown how acceptance rates will vary in a culturally and ethnically diverse urban emergency department. Objectives: To determine the characteristics of patients who accept or refuse ''opt-out'' HIV screening in an urban emergency department. Methods: A self-administered, anonymous survey is administered to ED patients who are 18 to 64 years of age. The questionnaire is administered in English, Russian, Mandarin, and Spanish. Questions include demographic characteristics, HIV risk factors, perception of HIV risk, and acceptance of rapid HIV screening in the emergency department. Results: To date, 145 patients have responded to our survey. Of the 145, 102 (70.3%) did not accept an HIV test (group 1) in their current ED visit and 43 (29.7%) accepted an HIV test (group 2). The two major reasons given for opting out (i.e., group 1) were ''I do not feel that I am at risk'' (59.8%) and ''I have been tested for HIV before'' (25.5%). There was no difference between the groups with regard to sex (P = 0.737), age (P = 0.351), religious affiliation (P = 0.750), marital status (P = 0.331), language spoken at home (P = 0.211), or whether they had been HIV tested before (73.2% in group 1 and 59.4% in group 2; P = 0.123). However, there was a statistically significant difference with regard to educational level and income. More patients in group 1 than in group 2 had less than a college-level education (69.0% vs. 46.1%; p < 0.05). Similarly, more patients in group 1 than in group 2 had an annual household income of ≤$25,000 (58.3% vs. 34.8%; p < 0.05). Conclusion: In a culturally and ethnically diverse urban emergency department, patients with a lower socioeconomic status and educational level tend to opt out of HIV screening offered in the ED. No significant difference in acceptance of ED HIV testing was found to date based on primary language spoken at home or religious affiliation. Background: Antimicrobial resistance is a problem that affects all emergency departments.
Objectives: Our goal was to examine all urinary pathogens and their resistance patterns from urine cultures collected in the emergency department (ED). Methods: This study was performed at an urban/suburban community-teaching hospital with an annual volume of 40,000 visits. Using electronic records, all cases of urine cultures received in 2009 were reviewed for data including type of bacteria, antibiotic resistance, and health care exposure (HCX). HCX was categorized as no prior hospitalization within the previous six months, hospitalization within the previous three months, hospitalization within the previous six months, nursing home resident (NH), and presence of an indwelling urinary catheter (UC). One investigator abstracted all data, with a second re-abstracting a random 5%, yielding kappa statistics between 0.697 and 1.00. Background: Approximately 12-20% of patients treated with epinephrine for anaphylaxis receive a second dose, but the risk factors associated with repeat epinephrine use remain poorly defined. Objectives: To determine whether obesity is a risk factor for requiring two or more epinephrine doses among patients who present to the emergency department (ED) with anaphylaxis due to food allergy or stinging insect hypersensitivity. Methods: We performed a retrospective chart review at four tertiary care hospitals that care for adults and children in New England over the following time periods: Massachusetts General Hospital (1/1/01-12/31/06), Brigham and Women's Hospital (1/1/01-12/31/06), Children's Hospital Boston (1/1/01-12/31/06), and Hasbro Children's Hospital (1/1/04-12/31/09). We reviewed the medical records of all patients presenting to the ED for food allergy or stinging insect hypersensitivity using ICD-9-CM codes. We focused on anthropometric data and the number of epinephrine treatments given before and during the ED visit. Among children, calculated BMIs were classified according to CDC growth indicators as underweight, healthy, overweight, or obese. All patients who presented on or after their 18th birthday were considered adults. Background: Transitions of care are ubiquitous in the emergency department (ED) and inevitably introduce the opportunity for errors. Despite recommendations in the literature, few emergency medicine (EM) residency programs provide formal training or a standard process for patient hand-offs. Checklists have been shown to be effective quality improvement measures in inpatient settings and may be a feasible method to improve ED hand-offs. Objectives: To determine if the use of a sign-out checklist improves the accuracy and efficiency of resident sign-out in the ED as measured by reduced omission of key information, communication behaviors, and time to sign out each patient. Methods: A prospective study of first- and second-year EM and non-EM residents rotating in the ED at an urban academic medical center with an annual ED volume of 55,000. Trained clinical research assistants observed resident sign-out during shift change over a two-week period and completed a 15-point binary observable behavior data collection tool to indicate whether or not key components of sign-out occurred. Time to sign out each patient was recorded. We then created and implemented a computerized sign-out checklist consisting of key elements that should be addressed during transitions of care, and instructed residents to use this during hand-offs. A two-week post-intervention observation phase was conducted using the same data collection tool.
Proportions, means, and non-parametric comparison tests were calculated using Stata. Results: One hundred fifteen sign-outs were observed prior to checklist implementation and 72 after; one sign-out was excluded for incompleteness. Significant improvements were seen in four of the measured sign-out components: inclusion of history of present illness increased by 18% (p < 0.001), likely diagnosis increased by 17% (p = 0.015), disposition status increased by 18% (p < 0.01), and patient/care team awareness of plan increased by 19% (p < 0.01) (Figure 1). Time data for 108 sign-outs pre-implementation and 72 post-implementation were available. Seven sign-outs were excluded for incompleteness or spurious values. Mean length of sign-out per patient was 83 s (95% CI 65 to 100) pre-implementation and 71.7 s (95% CI 52 to 92) post-implementation. Conclusion: Implementation of a checklist improved the transfer of information but did not affect the overall length of time for the sign-out. Objectives: To determine risk factors associated with adult patients presenting to the ED with cellulitis who fail initial antibiotic therapy and require a change of antibiotics or admission to hospital. Methods: This was a prospective cohort study of patients ≥18 years presenting with cellulitis to one of two tertiary care EDs (combined annual census 120,000). Patients were excluded if they had been treated with antibiotics for the cellulitis prior to presenting to the ED, if they were admitted to hospital, or if they had an abscess only. Trained research personnel administered a questionnaire at the initial ED visit with telephone follow-up 2 weeks later. Patient characteristics were summarized using descriptive statistics, and 95% confidence intervals (CIs) were estimated using standard equations. Backwards stepwise multivariable logistic regression models determined predictor variables independently associated with treatment failure (failed initial antibiotic therapy and required a change of antibiotics or admission to hospital). Results: 598 patients were enrolled, 47 were excluded, and 53 were lost to follow-up. The mean (SD) age was 53.1 (18.4) and 56.4% were male. 497 (99.8%) patients were given antibiotics in the ED: 185 (37.2%) were given oral, 231 (46.5%) were given IV, and 81 (16.3%) patients were given both oral and IV antibiotics. 102 (20.5%) patients had a treatment failure. Fever (temp >38°C) at triage (OR: 4.1, 95% CI: 1.5, 10.7), leg ulcers (OR: 3.1, 95% CI: 1.4, 6.6), edema or lymphedema (OR: 2.5, 95% CI: 1.4, 4.5), and prior cellulitis in the same area (OR: 1.8, 95% CI: 1.1, 2.9) were independently associated with treatment failure. Conclusion: This analysis found four risk factors associated with treatment failure in patients presenting to the ED with cellulitis. These risk factors should be considered when initiating empiric outpatient antibiotic therapy for patients with uncomplicated cellulitis. Background: Children presenting for care to a pediatric emergency department (PED) commonly require intravenous catheter (IV) placement. Prior studies report that the average number of sticks to successfully place an IV in children is 2.4. Successfully placing an IV requires identification of appropriate venous access targets. The VeinViewer Vision® (VVV) assists with IV placement by projecting a map of subcutaneous veins on the surface of the skin using near-infrared light.
Objectives: To compare the effectiveness of the VVV versus standard approaches: sight (S) and sight plus palpation (S+P) for identifying peripheral veins for intravenous catheter placement in children treated in a PED. Methods: Experienced pediatric emergency nurses and physicians identified peripheral venous access targets appropriate for intravenous cannulation of a cross-sectional convenience sample of English speaking children aged 2-17 years presenting for treatment of sub-critical injury or illness whose parents provided consent. The clinicians marked the veins with different colored washable marker and counted them on the dorsum of the hand and in the antecubital fossa using the three approaches: S, S+P, and VVV. A trained research assistant photographed each site for independent counting after each marking and recorded demographics and BMI. Counts were validated using independent photographic analyses. Data were entered into SAS 9.2 and analyzed using paired t-tests. Results: 146 patients completed the study. Clinicians were able to identify significantly more veins on the dorsum of the hand using VVV than S alone or S+P, 3.26 (p < 0.0001, CI 2.89-3.64) and 2.31 (p < 0.0001, CI 1.97-2.65), respectively, as well as significantly more veins in the antecubital fossa using VVV than S alone or S+P, 2.62 (p < 0.0001, CI 2.29-2.96) and 1.93 (p < 0.0001, CI 1.62-2.42), respectively. The differences in numbers of veins identified remained significant at p < 0.05 level across all ages, races, and BMIs of children and across clinicians and validating independent photographic analyses. Conclusion: Experienced emergency nurses and physicians were able to identify significantly more venous access targets appropriate for intravenous cannulation in the dorsum of the hand and antecubital fossa of children presenting for treatment in a PED using VVV than the standard approaches of sight or sight plus palpation. An Background: Mental health emergencies have increased over the past two decades, and contribute to the ongoing rise in U.S. ED visit volumes. Although data are limited, there is a general perception that the availability of in-person psychiatric consultation in the ED and of inpatient psychiatric beds is inadequate. Objectives: To examine the availability of in-person psychiatry consultation in a heterogeneous sample of U.S. EDs, and typical delays in transfer of ED patients to an inpatient psychiatric bed. Methods: During 2009-2011, we mailed a survey to all ED directors in a convenience sample of nine US states (AR, CO, GA, HI, MA, MN, OR, VT, and WY). All sites were asked: ''Are psychiatric consults available in-person to the ED?'' (yes/no), with affirmative respondents asked about the typical delay. Sites also were asked about typical ED boarding time between a request for patient transfer and actual patient departure from the ED to an inpatient psychiatric bed. ED characteristics included rural/urban location, visit volume (visits/hour), admission rate, ED staffing, and the proportion of patients without insurance. Data analysis used chi-square tests and multivariable logistic regression. Results: Surveys were collected from 495 (91%) of the 541 EDs, with >80% response rate in every state. Overall, only 30% responded that psychiatric consults were available in-person to the ED. 
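Returning briefly to the vein-identification comparison above: it relies on paired t-tests on per-patient vein counts. A minimal sketch of that kind of paired comparison is shown below; the counts are invented and scipy is used in place of SAS 9.2.

import numpy as np
from scipy import stats

# Invented per-patient vein counts on the dorsum of the hand,
# counted by the same clinician with and without the device
sight_only = np.array([2, 3, 1, 2, 4, 2, 3, 1, 2, 3])
device     = np.array([5, 6, 4, 5, 7, 5, 6, 4, 5, 6])

diff = device - sight_only
t, p = stats.ttest_rel(device, sight_only)   # paired t-test
# 95% CI for the mean paired difference
ci = stats.t.interval(0.95, len(diff) - 1,
                      loc=diff.mean(),
                      scale=stats.sem(diff))
print(f"mean difference = {diff.mean():.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.2g}")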
In multivariable logistic regression, ED characteristics independently associated with lack of in-person psychiatric consultation were: location within specific states (e.g., AR, GA), rural location, lower visit volume, and lower admission rate. Among the subset of EDs with psychiatric consults available, 48% reported a typical wait time of at least 1 hour. Overall, 54% of EDs reported that the typical time from request to actual patient transfer to an inpatient psychiatric bed was >6 hours, and 47% reported a maximum time in the past year of >1 day (median 3 days, IQR 2-4). In a multivariable model, location in MA and higher visit volume were associated with greater odds of a maximum wait time of >1 day. Conclusion: Among 495 surveyed EDs in nine states, only 30% have in-person psychiatric consultants available. Moreover, approximately half of EDs report boarding times of >6 h from request for transfer to actual departure to an inpatient psychiatric bed. Background: Many emergency departments (ED) in the United States use a five-tiered triage protocol that provides only a limited evaluation of psychiatric patients. The Australian Triage Scale (ATS), a psychiatric triage system, has been used throughout Australia and New Zealand since the early 1990s. Objectives: The objective of the study is to compare the current triage system, Emergency Nurses Association (ENA) ESI 5-Tier, to the ATS for the evaluation of psychiatric patients presenting to the ED. Methods: A convenience sample of patients, 18 years of age and older, presenting with psychiatric complaints at triage were given the ENA triage assessment by the triage nurse. A second triage assessment, performed by a research fellow, included all observed and reported elements using the ATS protocol, a self-assessment survey, and an agitation assessment using the Richmond Agitation Sedation Scale (RASS). The study was performed at an inner-city Level I trauma center with 60,000 visits per year. The ED was a catchment facility for the police department for psychiatric patients in the area. Patients were excluded if they were unstable, unable to communicate, or had a non-psychiatric complaint. Results were analyzed in SPSS v16 using frequencies, descriptive statistics, and ANOVA. Results: A total of 100 patients were enrolled in the study: 72% were African American, 14% Caucasian, 13% Hispanic, 1% Asian, and 1% Indian; 63% of subjects enrolled were male. The patients' level of agitation using the RASS showed 59% were alert and calm, 22% were restless and anxious, 6% were agitated, and 5% were combative, violent, or dangerous to self. The only significant correlation found was between the ATS and several self-assessment questions: ''I feel agitated on a 0 to 10 scale'' (p = 0.031) and ''I feel violent on a 0 to 10 scale'' (p = 0.001). There were no significant correlations found among the ENA triage, RASS scores, and throughput times. Conclusion: The ATS was more sensitive to the patient declaring that he or she was agitated or felt violent. This suggests that the ATS may be more useful in determining the severity of need of psychiatric patients presenting to the ED. Background: Hemoglobin-based oxygen carriers (HBOCs) have been evaluated for small-volume resuscitation of hemorrhagic shock due to their oxygen-carrying capability, but have found limited utility due to vasoactive side-effects from nitric oxide (NO) scavenging.
Objectives: To define an optimal HBOC dosing strategy and evaluate the effect of an added NO donor, we use a prehospital swine polytrauma model to compare the effect of low- vs. moderate-volume HBOC resuscitation with and without nitroglycerin (NTG) co-infusion as an NO donor. We hypothesize that survival time will improve with moderate resuscitation and that an NO donor will add additional benefit. Methods: Survival time was compared in groups (n = 7) of anesthetized swine subjected to simultaneous traumatic brain injury and uncontrolled hemorrhagic shock by aortic tear. Animals received one of three different resuscitation fluids: lactated Ringer's (LR), HBOC, or vasoattenuated HBOC with NTG co-infusion. For comparison, these fluids were given in a severely limited fashion (SL) as one bolus every 30 minutes up to four total, or a moderately limited fashion (ML) as one bolus every 15 minutes up to seven total, to maintain mean arterial pressure ≥60 mmHg. Comparison of resuscitation regimen and fluid type on survival time was made using two-way ANOVA with interaction and Tukey-Kramer adjustment for individual comparisons. Results: There was a significant interaction between fluid regimen and resuscitation fluid type (ANOVA, p = 0.011), indicating that the response to SL or ML resuscitation was fluid type-dependent. Within the LR and HBOC+NTG groups, survival time (mean, 95% CI) was longer for SL, 323.5 min (…). Background: Radius injuries are common and result from many different mechanisms of injury (MOI). Knowing common fracture locations may help in diagnosis and treatment, especially in patients presenting with distracting injuries that may mask the pain of a radius fracture. Objectives: We set out to determine the incidence of radius fracture locations among patients presenting to an urban emergency department (ED). Background: Carbon monoxide (CO) is the leading cause of poisoning morbidity and mortality in the United States. Standard treatment includes supplemental oxygen and supportive care. The utility of hyperbaric oxygen (HBO) therapy has been challenged by a recent Cochrane review. Hypothermia may mitigate delayed neurotoxic effects after CO poisoning as it is effective in cardiac arrest patients with similar neuropathology. Objectives: To develop a rat model of acute and delayed severe CO toxicity as measured by behavioral deficits and cell necrosis in post-sacrifice brain tissue. Methods: A total of 28 rats were used for model development; variable concentrations of CO and exposure times were compared to achieve severe toxicity. For the protocol, six senescent Long Evans rats were exposed to 2,000 ppm of CO for 20 minutes, then 1,500 ppm for 160 minutes, followed by three successive dives at 30,000 ppm with an endpoint of apnea or seizure; there was a brief interlude between dives for recovery. A modified Katz assessment tool was used to assess behavior at baseline and at 2 hours, 1 day, and 1, 2, 3, 4, 5, and 6 weeks post-exposure. Following this, the brains were transcardially fixed with formalin, and 5 µm sagittal slices were embedded in paraffin and stained with hematoxylin and eosin.
A pathologist quantified the percentage of necrotic cells in the cortex, hippocampus (pyramidal cells), caudoputamen, cerebellum (Purkinje cells), dentate gyrus, and thalamus of each brain to the nearest 10% from 10 randomly selected high-power fields (400x). Background: There remains controversy about the cardiotoxic effects of droperidol, and in particular the risk of QT prolongation and Torsades de Pointes (TdP). Objectives: This study aimed to investigate the cardiac and haemodynamic effects of high-dose parenteral droperidol for sedation of acute behavioural disturbance (ABD) in the emergency department (ED). Methods: A standardised intramuscular (IM) protocol for the sedation of ED patients with ABD was instituted as part of a prospective observational safety study in four regional and metropolitan EDs. Patients with ABD were given an initial dose of 10 mg droperidol followed by an additional dose of 10 mg after 15 min if required. Inclusion criteria were patients requiring physical restraint and parenteral sedation. The primary outcome was the proportion of patients who had a prolonged QT interval on ECG. The QT interval was plotted against the heart rate (HR) on the QT nomogram to determine if the QT was abnormal. Secondary outcomes were frequency of hypotension and cardiac arrhythmias. Results: ECGs were available from 273 of 424 patients with ABD given droperidol. The median dose was 10 mg (IQR 10-15 mg; range: 5 to 30 mg). The median age was 33 years (range: 16 to 92) and 163 were males (60%). A total of four (1%) QT-HR pairs were above the ''at-risk'' line on the QT nomogram. Transient hypotension occurred in 8 (3%), and no arrhythmias were detected. Conclusion: Droperidol appears to be safe when used for rapid sedation in the dose range of 5 to 30 mg. It rarely causes hypotension or QT prolongation. Background: Soldiers and law enforcement agents are repeatedly exposed to blast events in the course of carrying out their duties during training and combat operations. Little data exist on the effect of this exposure on the physiological function of the human body. Both military and law enforcement dynamic entry personnel, ''Breachers,'' have expressed concern about the risk of injury as a result of multiple blast exposures. Breachers apply explosives as a means of gaining access to barricaded or hardened structures. These specialists can be exposed to as many as a dozen lead-encased charges per day during training exercises. Objectives: This observational study was performed by the Breacher Injury Consortium to determine the effect of short-term blast exposure on Breachers' whole blood lead levels (BLLs) and zinc protoporphyrin levels (ZPPLs). Methods: Two 2-week Basic Breaching training classes were conducted by the United States Marine Corps' Weapons Training Battalion Dynamic Entry School. Each class included 14 students and up to three instructors, with six non-breaching Marines serving as a control group. To evaluate for lead exposure, venous blood samples were acquired from study participants on the weekends before and after training for the first training class, whereas the second training class had an additional level drawn mid-training. BLLs and ZPPLs were measured in whole-blood samples using the furnace atomic absorption method and the hematofluorometer method, respectively.
Results: Analysis of these blast injury data indicated that students demonstrated significantly increased BLLs post-explosion (mean = 7 mcg/dL, SD 2.42, p < 0.001) compared to pre-training (mean = 3 mcg/dL, SD 1.60) and control subjects (mean = 3 mcg/dL, SD 2.73, p < 0.001). Instructors also demonstrated significantly increased BLLs post-explosion (mean = 6 mcg/dL, SD 1.95, p < 0.02) compared to pre-training (mean = 3 mcg/dL, SD 1.14) and control subjects (mean = 3 mcg/dL, SD 2.73, p < 0.001). Student and instructor ZPPLs were not significantly different post-training compared to pre-training or control groups. Conclusion: The observation from this study that Breachers are at risk of mild increases in BLLs supports the need for further investigation into the role of lead following repeated blast exposure with munitions encased in lead. Background: Notification of a patient's death to family members represents a challenging and stressful task for emergency physicians. Complex communication skills such as those required for breaking bad news (BBN) are conventionally taught with small-group and other interactive learning formats. We developed a de novo multi-media web-based learning (WBL) module of curriculum content for a standardized patient interaction (SPI) for senior medical students during their emergency medicine rotation. Objectives: We proposed that use of an asynchronous WBL module would result in students' skill acquisition for breaking bad news. Methods: We tracked module utilization and performance on the SPI to determine whether students accessed the materials and if they were able to demonstrate proficiency in its application. Performance on the SPI was assessed utilizing a BBN-specific content instrument developed from the GRIEV_ING mnemonic as well as a previously validated instrument for assessing communication skills. Results: Three hundred seventy-two students were enrolled in the BBN curriculum. There was a 92% completion rate of the WBL module despite students being given the option to utilize review articles alone for preparation. Students interacted with the activities within the module as evidenced by a mean number of mouse clicks of 42.1 (SD 21.6). Overall SPI scores were 94.5% (SD 4.4), with content checklist scores of 92.8% (SD 5.7) and interpersonal communication scores of 97.9% (SD 4.7). Five students had failing content scores (<75%) on the SPI and had a mean number of clicks of 30.8 (SD 28.2), which was not significantly lower than that of students who passed (p = 0.21). Students in the first year of WBL deployment completed self-confidence assessments, which showed significant increases in confidence (2.86 to …). Background: Pelvic ultrasonography (US) is a useful bedside tool for the evaluation of women with suspected pelvic pathology. While pelvic US is often performed by the radiology department, it often lacks clinical correlation and takes more time than bedside US in the ED. This was a prospective observational study comparing the ED length of stay (LOS) of patients receiving ED US versus those receiving radiology US. Objectives: The primary objective was to measure the difference in ED LOS.
The secondary objectives were to 1) assess the role of pregnancy status, OB/GYN consult in the ED, and disposition, in influencing the ED LOS; and 2) to assess the safety of ED US by looking at patient return to the ED within 2 weeks and whether that led to an alternative diagnosis.Methods: Subjects were women over 13 years old presenting with a GI or GU complaint, and who received either an ED or radiology US. A t-test was used for the primary objective, and linear regression to test the secondary objective. Odds ratios were performed to assess for interaction between these factors and type of ultrasound. Subgroup analyses were performed if significant interaction was detected. Results: Forty-eight patients received an ED US and 85 patients received a radiology US. Subjects receiving an ED US spent 162 minutes less in the ED (p < 0.001). In multivariate analysis, even when controlling for pregnancy status, OB/GYN consult, and disposition, patients who received an ED US had a LOS reduction of 108 minutes (p < 0.05). In odds ratio analysis, patients who were pregnant were 11 times more likely to have received an ED US (p < 0.05). Patients who received an OB/GYN consult in the ED were five times more likely to receive a radiology US (p < 0.05). There was no association between type of US and disposition. In subgroup analyses, pregnant and non-pregnant patients who received an ED US still had a LOS reduction of 140 minutes (p < 0.01) and 112 minutes (p < 0.05), respectively. Sample sizes were inadequate for subgroup analysis for subjects who had OB/GYN consults. In patients who did not receive an OB/GYN consult, those who received an ED US had a LOS reduction of 139 minutes (p < 0.001). Finally, 10% of subjects returned within two weeks, but none led to an alternative diagnosis. Conclusion: Even when controlling for disposition, OB/GYN consultation, and pregnancy status, patients who received an ED US had a statistically and clinically significant reduction in their ED LOS. In addition, ED US is safe and accurate. Background: Although early surface cooling of burns reduces pain and depth of injury, there are concerns that cooling of large burns may result in hypothermia and worse outcomes. In contrast, controlled mild hypothermia improves outcomes after cardiac arrest and traumatic burn injury. Objectives: The authors hypothesized that controlled mild hypothermia would prolong survival in a fluidresuscitated rat model of large scald burns. Methods: Forty Sprague-Dawley rats (250-300 g) were anesthetized with 40 mg/kg intramuscular ketamine and 5 mg/kg xylazine, with supplemental inhalational isoflurane as needed. A single full-thickness scald burn covering 40% of the total body surface area was created per rat using a Mason-Walker template placed in boiling water (100 deg C) for a period of 10 seconds. The rats were randomized to hypothermia (n = 20) and nonhypothermia (n = 20). Core body temperature was continuously monitored with a rectal temperature probe. Hypothermia was induced through intraperitoneal injection of cooled (4 deg C) saline. The core temperature was reduced by 2 deg C and maintained for a period of 2 hours, applying an ice or heat pack when necessary. The rats were then rewarmed back to baseline temperature. In the control group, room temperature saline was injected into the intraperitoneal cavity and core temperature was maintained using a heating pad as needed. The rats were monitored until death or for a period of 7 days, whichever was greater. 
The primary outcome was death. The difference in survival was determined using Kaplan-Meier analysis and the log-rank test. Results: The mean core temperatures were 32.5°C for the hypothermic group and 35.6°C for the normothermic group. The mean survival times were 124 hours for the hypothermic group (95% confidence interval [CI] = 98 to 150) and 100 hours for the normothermic group (95% CI = 68 to 132). The seven-day survival rates in the hypothermic and non-hypothermic groups were 67% and 53%, respectively. These differences were not significant (P = 0.33 for both comparisons). Conclusion: Induction of brief mild hypothermia increased, but did not significantly prolong, survival in a resuscitated rat model of large scald burns. Objectives: We sought to determine levels of serum mtDNA in ED patients with sepsis compared to controls and the association between mtDNA and both inflammation and severity of illness among patients with sepsis. Methods: Prospective observational study of patients presenting to one of three large, urban, tertiary care EDs. Inclusion criteria: 1) Septic shock: suspected infection, two or more systemic inflammatory response (SIRS) criteria, and systolic blood pressure (SBP) <90 mmHg despite a fluid bolus; 2) Sepsis: suspected infection, two or more SIRS criteria, and SBP >90 mmHg; and 3) Control: ED patients without suspected infection, no SIRS criteria, and SBP >90 mmHg. Three mtDNAs (COX-III, cytochrome B, and NADH) were measured using real-time quantitative PCR from serum drawn at enrollment. IL-6 and IL-10 were measured using a Bio-Plex suspension array system. Baseline characteristics, IL-6, IL-10, and mtDNAs were compared using one-way ANOVA or the Fisher exact test, as appropriate. Correlations between mtDNAs and IL-6/IL-10 were determined using Spearman's rank correlation. Linear regression models were constructed using the SOFA score as the dependent variable and each mtDNA as the variable of interest in an independent model. A Bonferroni adjustment was made for multiple comparisons. Results: Of 93 patients, 24 were controls, 29 had sepsis, and 40 had septic shock. We found no significant difference in any serum mtDNAs among the cohorts (p = 0.14 to 0.30). All mtDNAs showed a small but significant negative correlation with IL-6 and IL-10 (ρ = −0.24 to −0.35). Among patients with sepsis or septic shock (n = 69), we found a small but significant negative association between mtDNA and SOFA score, most clearly with cytochrome b (p = 0.001). Conclusion: We found no difference in serum mtDNAs between patients with sepsis, septic shock, and controls. Serum mtDNAs were negatively associated with inflammation and severity of illness, suggesting that, as opposed to trauma, serum mtDNA does not significantly contribute to the pathophysiology of the sepsis syndromes. Methods: We consecutively enrolled ED patients ≥18 years of age who met anaphylaxis diagnostic criteria from April 2008 to July 2011 at a tertiary center with 72,000 annual visits. We collected data on antihypertensive medications, suspected causes, signs and symptoms, ED management, and disposition. Markers of severe anaphylaxis were defined as 1) intubation, 2) hospitalization (ICU or floor), and 3) signs and symptoms involving ≥3 organ systems. Antihypertensive medications evaluated included beta-blockers, angiotensin-converting enzyme (ACE) inhibitors, and calcium channel blockers (CCB).
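The burn-hypothermia study above compares groups using Kaplan-Meier analysis and a log-rank test. The sketch below shows a bare-bones product-limit (Kaplan-Meier) estimator on invented survival times with censoring at 7 days; it is illustrative only and omits the log-rank comparison.

import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up time for each animal; events: 1 = died, 0 = censored."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):          # each distinct death time
        at_risk = np.sum(times >= t)                 # still being followed
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk                  # step down the curve
        surv.append((t, s))
    return surv

# Invented data: hours to death, censored at 168 h (7 days) if still alive
hypo_times  = [36, 60, 90, 120, 168, 168, 168]
hypo_events = [1, 1, 1, 1, 0, 0, 0]
for t, s in kaplan_meier(hypo_times, hypo_events):
    print(f"t = {t:>5.0f} h  S(t) = {s:.2f}")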
We conducted univariate and multivariate analyses to measure the association between antihypertensive medications and markers of severe anaphylaxis. Because previous studies demonstrated an association of age and the suspected cause of the reaction with anaphylaxis severity, we adjusted for these known confounders in multivariate analyses. We report associations as odds ratios (ORs) and corresponding 95% CIs with p-values. Results: Among 302 patients with anaphylaxis, the median age (IQR) was 44 (31-58) and 204 (67.5%) were female. Eight (2.7%) patients were intubated, 57 (19%) required hospitalization, and 139 (46%) had ≥3 system involvement. Forty-nine (16%) were on beta-blockers, 34 (11%) on ACE inhibitors, and 22 (7.3%) on CCB. In univariate analysis, ACE inhibitors were associated with intubation and ≥3 system involvement, and CCB were associated with hospital admission. In multivariate analysis, after adjusting for age and suspected cause, ACE inhibitors remained associated with hospital admission and beta-blockers remained associated with both hospital admission and ≥3 system involvement. Conclusion: In ED patients, beta-blocker and ACE inhibitor use may predict increased anaphylaxis severity independent of age and suspected cause of the anaphylactic reaction. Background: Advanced Cardiac Life Support (ACLS) resuscitation requires rapid assessment and intervention. Some skills, like patient assessment, quality CPR, defibrillation, and medication administration, require provider confidence to be performed quickly and correctly. It is unclear, however, whether high-fidelity simulation can improve confidence in a multidisciplinary group of providers with high levels of clinical experience. Objectives: The purpose of the study was to test the hypothesis that providers undergoing high-fidelity simulation of cardiopulmonary arrest scenarios will express greater confidence. Methods: This was a prospective cohort study conducted at an urban Level I trauma center from January to October 2011 with a convenience sample of registered nurses (RN), licensed practical nurses, nurse practitioners, resident physicians, and physician assistants who agreed to participate in two of four high-fidelity simulation (Laerdal 3G) sessions of cardiopulmonary arrest scenarios about 3 months apart. Demographics were recorded. Providers completed a validated pre- and post-test five-point Likert scale confidence measurement tool before and after each session that ranged from not at all confident (1) to very confident (5) in recognizing signs and symptoms of, appropriately intervening in, and evaluating intervention effectiveness in cardiac and respiratory arrests. Descriptive statistics, paired t-tests, and ANOVA were used for data analysis. Sensitivity testing evaluated subjects who completed their second session at 6 months rather than 3 months. Results: Sixty-five subjects completed consent, 39 completed one session, and 23 completed at least two sessions. Background: Prehospital studies have focused on the effect of health care provider gender on patient satisfaction. We know of no study that has assessed patient satisfaction in relation to both patient and prehospital provider gender. Some studies have shown higher patient satisfaction rates when patients are cared for by a female health care provider. Objectives: To determine the effect of EMS provider gender on patient satisfaction with prehospital care. Methods: A convenience sample of all adult patients brought by ambulance to our ED, an urban Level I trauma center.
A trained research associate (RA) stationed at triage conducted a survey using Press Ganey EMS patient satisfaction questions. There were thirteen questions evaluating prehospital provider skills such as driving, courtesy, listening, medical care, and communication. Each skill was assigned a point value between one and five; the higher the value, the better the skill was performed. The patient's ambulance care report was copied for additional data extraction. Results: A total of 225 surveys were completed. Average patient age was 71, and 54% were female. The maximum possible score across all questions was 65 (mean 62.63 ± 5.1). Prehospital provider pairings were: male-male (n = 141), male-female (n = 71), and female-female (n = 13). There were no statistically significant differences in scores between the pairings (mean scores for male:male 19.3, male:female 19.1, and female:female 19.2; p = 0.73). We found no statistically significant differences in satisfaction scores based on the gender of the EMT in the back of the ambulance: males had a mean score of 62.7 and females had a mean score of 62.6 (p = 0.91). We examined gender concordance by comparing the gender of the patient to the gender of the prehospital provider and found that male-male had a mean score of 62.8, female-female 62.2, and when the patient and prehospital provider gender did not match, 62.5 (p = 0.71). Conclusion: We found no effect of gender difference on patient satisfaction with prehospital care. We also found that, overall, patients are very satisfied with their prehospital care. Objectives: We set out to determine the sensitivity and specificity of EPs in determining the presence of recently ingested tablets or tablet fragments. Methods: This was a prospective volunteer study at an academic emergency department. Healthy volunteers were enrolled and kept NPO for 6 hours prior to tablet ingestion. Over 10 minutes, subjects ingested 800 ml of water and 30 tablets. Ultrasound video clips were obtained prior to any tablet ingestion, after drinking 200 ml of water, after 10 tablets, after 20 tablets, after 30 tablets, and 60 minutes after the final tablet ingestion, yielding six clips per volunteer. All video clips were randomized and shown to three EPs who were fellowship-trained in emergency ultrasound. EPs recorded the presence or absence of tablets. Results: Ten volunteers underwent the pill ingestion protocol and sixty clips were collected. Results for all cases and each rater are reported in the table. Overall there was moderate agreement between raters (kappa = 0.42). Sub-group analysis of 10, 20, or 30 pills did not show any significant improvement in sensitivity and specificity. Conclusion: Ultrasound has moderate specificity but poor sensitivity for identification of tablet ingestion. These results imply that point-of-care ultrasound has limited utility in diagnosing large tablet ingestion. Background: Intravenous fat emulsion (IFE) therapy is a novel treatment that has been used to reverse the acute toxicity of some xenobiotics with varied success. US Poison Control Centers (PCC) are recommending this therapy for clinical use, but data regarding these recommendations are lacking. Objectives: To determine how US PCC have incorporated IFE as a treatment strategy for poisoning.
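The tablet-ingestion study above summarizes rater agreement with a kappa statistic (0.42, moderate agreement). A minimal sketch of Cohen's kappa for two raters making binary ''tablets present/absent'' calls follows; the ratings are invented, and the study used three raters, for which a multi-rater variant such as Fleiss' kappa would be needed.

import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters giving binary (0/1) calls on the same clips."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed = np.mean(a == b)                       # observed agreement
    # chance agreement from each rater's marginal rates
    p_yes = a.mean() * b.mean()
    p_no = (1 - a.mean()) * (1 - b.mean())
    expected = p_yes + p_no
    return (observed - expected) / (1 - expected)

# Invented reads on 12 clips: 1 = tablets seen, 0 = not seen
rater_1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0]
rater_2 = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")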
Background: Intravenous fat emulsion (IFE) therapy is a novel treatment that has been used to reverse the acute toxicity of some xenobiotics with varied success. US Poison Control Centers (PCC) are recommending this therapy for clinical use, but data regarding these recommendations are lacking. Objectives: To determine how US PCC have incorporated IFE as a treatment strategy for poisoning. Methods: A closed-format multiple-choice survey instrument was developed, piloted, revised, and then sent electronically to the medical director of every accredited US PCC using SurveyMonkey in March 2011; addresses were obtained from the AAPCC listserv, participation was voluntary and anonymous, and three reminder invitations were sent during the study period. Data were analyzed using descriptive statistics. Results: Forty-five of 57 (79%) PCC medical directors completed the survey. All 45 respondents felt that IFE therapy plays a role in the acute overdose setting. Thirty (67%) PCC have a protocol for IFE therapy: 29 (97%) recommend an initial bolus of 1.5 mL/kg of a 20% lipid emulsion, 28 (93%) recommend an infusion of lipids, and 27 of 28 recommend an initial infusion rate of 0.25 mL/kg of a 20% lipid emulsion. Thirty-three (73%) felt that IFE had no clinically significant side effects at a bolus dose of 1.5 mL/kg (20% emulsion). Forty-four directors (98%) felt that the "lipid sink" mechanism contributed to the clinical effects of IFE therapy, but 26 (58%) felt that an as-yet-undiscovered mechanism likely contributed as well. In a scenario with cardiac arrest due to a single xenobiotic, directors stated that their center would always or often recommend IFE after overdose of bupivacaine (43; 96%), verapamil (36; 80%), amitriptyline (31; 69%), or an unknown xenobiotic (12; 27%). In a scenario with significant hemodynamic instability due to a single xenobiotic, directors stated that their PCC would always or often recommend IFE after overdose of bupivacaine (40; 89%), verapamil (28; 62%), amitriptyline (25; 56%), or an unknown xenobiotic (8; 18%). Conclusion: IFE therapy is being recommended by US PCC. Protocols and dosing regimens are nearly uniform. Most directors feel that IFE is safe but are more likely to recommend IFE in patients with cardiac arrest than in patients with severe hemodynamic compromise. Further research is warranted.

levels drawn at 4 hours or more (240 mcg/ml at 5 hours and 198 mcg/ml at 4 hours, respectively). The NPV for toxic ingestion of an initial APAP level less than 100 mcg/ml was 97.8% (95% CI 92.3-99.7%). Conclusion: An APAP level of less than 100 mcg/ml drawn less than 4 hours after ingestion had a high NPV for excluding toxic ingestion. However, the authors would not recommend relying on levels obtained under 4 hours to exclude toxicity, as the potential for up to 6.7% false negative results is considered unacceptable.
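The NPV and 95% CI quoted in the acetaminophen fragment above are the kind of figures produced by a proportion estimate with an exact (Clopper-Pearson) binomial interval. A sketch follows; the counts are illustrative, since the abstract does not report its raw 2x2 table or state which CI method was used.

from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    # Exact binomial confidence interval for k successes out of n trials.
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

true_negatives = 88   # initial level < 100 mcg/ml and no toxic ingestion (illustrative)
test_negatives = 90   # all patients with an initial level < 100 mcg/ml (illustrative)

npv = true_negatives / test_negatives
lo, hi = clopper_pearson(true_negatives, test_negatives)
print(f"NPV = {npv:.1%} (95% CI {lo:.1%}-{hi:.1%})")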
Background: Genetic variations in the mu-opioid receptor gene (OPRM1) mediate individual differences in response to pain and addiction. Objectives: To determine whether the common A118G (rs1799971) mu-opioid receptor single nucleotide polymorphism (SNP) or the alternative-splicing SNP of OPRM1 (rs2075572) is associated with overdose severity, we assessed allele frequencies of each and their associations with clinical severity in patients presenting to the emergency department (ED) with acute drug overdose. Methods: In an observational cohort study at an urban teaching hospital, we evaluated consecutive adult ED patients presenting with suspected acute drug overdose over a 12-month period for whom discarded blood samples were available for analysis. Specimens were linked with clinical variables (demographics, urine toxicology screens, clinical outcomes) and then de-identified prior to genetic SNP analysis. In-hospital severe outcomes were defined as either respiratory arrest (RA, defined by mechanical ventilation) or cardiac arrest (CA, defined by loss of pulse). Blinded TaqMan genotyping (Applied Biosystems) of the SNPs was performed after standard DNA purification (Qiagen) and whole genome amplification (Qiagen REPLI-g). The PLINK 1.07 genetic association analysis program was used to verify SNP data quality, test for departure from Hardy-Weinberg equilibrium, and test individual SNPs for statistical association. Results: We evaluated 178 patients (37% female, mean age 41.2) who suffered a total of 13 RAs and 3 CAs (of whom 2 died). Urine toxicology was positive in 33%, including 32 positive for benzodiazepines, 26 for cocaine, 21 for opiates, 13 for methadone, and 6 for barbiturates. All genotypes examined conformed to Hardy-Weinberg equilibrium. The 118G allele was associated with 2.5-fold increased odds of CA/RA (OR 2.5, p < 0.05). The rs2075572 mutant allele was not associated with CA/RA. Conclusion: These data suggest that the 118G mutant allele of the OPRM1 gene is associated with worse clinical severity in patients with acute drug overdose. The findings add to the growing body of evidence linking the A118G SNP with clinical outcome and raise the question of whether the A118G SNP may be a potential target for personalized prescribing practices with regard to behavioral/physiologic overdose vulnerability.
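For readers unfamiliar with the analyses the abstract delegates to PLINK, a rough Python sketch of the two checks described (a Hardy-Weinberg goodness-of-fit test for one SNP and a 2x2 allele-count association test against the severe-outcome endpoint) follows. The genotype and allele counts are invented for illustration and are not the study's data; PLINK's own implementations may use different exact tests.

import numpy as np
from scipy.stats import chi2, fisher_exact

def hwe_chi2(n_AA, n_AG, n_GG):
    # Chi-square test of Hardy-Weinberg equilibrium from biallelic genotype counts.
    n = n_AA + n_AG + n_GG
    p = (2 * n_AA + n_AG) / (2 * n)          # frequency of the A allele
    q = 1 - p
    expected = np.array([p * p * n, 2 * p * q * n, q * q * n])
    observed = np.array([n_AA, n_AG, n_GG])
    stat = np.sum((observed - expected) ** 2 / expected)
    return stat, chi2.sf(stat, df=1)          # 1 df for a biallelic SNP

# Hypothetical genotype counts for rs1799971 (A118G)
stat, p_hwe = hwe_chi2(n_AA=120, n_AG=50, n_GG=8)
print(f"HWE chi-square = {stat:.2f}, p = {p_hwe:.2f}")

# Hypothetical allele counts split by severe outcome (CA/RA) vs. none
#                 G allele   A allele
table = np.array([[10,        22],      # CA/RA
                  [56,       268]])     # no CA/RA
odds_ratio, p_assoc = fisher_exact(table)
print(f"allelic OR = {odds_ratio:.2f}, Fisher exact p = {p_assoc:.3f}")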