key: cord-0009681-7s38gfq7 authors: nan title: SAEM Annual Meeting Abstracts date: 2013-04-23 journal: Acad Emerg Med DOI: 10.1111/acem.12115 sha: b069ec40893062f831f821b112a371544ba04579 doc_id: 9681 cord_uid: 7s38gfq7 nan Results: One hundred nineteen subjects received both regular and ULDCTs. Mean age was 43(+/-14) years old, 43% were female. Mean BMI was 30.1(+/-7.8), with 63% receiving high BMI ULDCT and 37% receiving low BMI ULDCT. On SCT there were 58.0% with kidney stone; 29.0% of these were large (>5 mm). Agreement for presence of kidney stones was 94.1% (93.0-95.1), with a sensitivity of 92.8% (83. 2-97.3) , specificity of 96.0% (85. 1-99.3) , and kappa of 0.88 (0.79-0.97). There were five stones not seen on ULDCT; all were 5 mm or less in size and distally located. Two patients had acutely important alternate findings on SCT (both diverticulitis), and both of these were also identified on ULDCT. The mean DLP for regular dose CTs was 778(381) mGy-cm and 103(37)mGy-cm for ULDCTs, representing a mean dose reduction of 87%. Conclusion: ULDCT with mean effective dose comparable to plain radiography shows excellent performance in the detection and characterization of kidney stones in ED patients, with substantial dose reduction. All large stones and acutely important alternative findings were detected. The Objectives: To demonstrate the effect of operator experience on test performance characteristics by stratifying ultrasound scans performed by novice and experienced sonologists for diagnosing appendicitis in children. The study involved a prospective observational convenience sample of children in the ED with suspected appendicitis requiring imaging evaluation adhering to the STARD criteria. Outcomes were determined by operative or pathology report in those who had appendicitis, and 3-week phone follow-up in those patients who were nonoperative. The effect of operator experience was examined by stratifying test performance characteristics by level of experience. Novice scans were performed by sonologists who had never diagnosed appendicitis with ultrasound prior to study start and were compared to scans performed by a sonologist with 10 years experience in evaluating appendicitis Objectives: To report the test characteristics of BUS for the diagnosis of pediatric intussusception at a single institution. Methods: Inclusion criteria were: 1) patients 0-18 years old seen in the pediatric emergency department (ED) with a clinical presentation concerning for intussusceptions, 2) BUS was performed to identify intussusception and bedside impression documented in the medical record, 3) a "formal" diagnostic study (such as computed tomography, ultrasound, or barium enema) was performed by the radiology department after BUS was completed. Electronic medical record and ED BUS archive were screened to retrospectively identify patient visits between 1/1/09 to 10/3/12 during which a BUS was performed. These records were then reviewed to identify patients for inclusion in the study. All emergency physicians who performed the BUS had undergone a minimum of 1-hour didactic training on the use of BUS to diagnose pediatric intussusception. Results: A total of 1631 charts were reviewed, with 49 meeting inclusion criteria. Five of those were later excluded for incomplete documentation or lack of saved BUS images. Of the 44 patients ultimately included, 30 were male (68%). Prevalence of intussusception was 23%. Mean age of the subjects was 31 months. 
There were a total of 12 positive BUS for intussusception, and 32 negative studies. There was no difference in demographic characteristics between the patients with and without intussusception. BUS was 100% sensitive (95% CI 66-100%) and 94% specific (95% CI 79-99%) for detection of pediatric intussusception compared to radiology study results. There were two false positive (one of which was determined to be transient intussusception after review of the bedside images with radiologist) and no false negative BUS studies. Specificity of BUS increased to 97% if only one false positive was taken into account. Conclusion: BUS is an accurate and safe means of diagnosing acute intussusception in pediatric patients. Further study might be indicated to confirm such benefits. Mean accuracy was assessed at the 150 and 300 total exam benchmarks. For five common US applications, we assessed accuracy at the 25 and 50 exam benchmarks. We then compared the accuracies over the range of exams using two-sample t-tests. institutions who submitted a total of 32,988 exams. At the 150 total exam benchmark, accuracy was 92% among 93 subjects who achieved at least this experiential level. At the 300 total exam benchmark, accuracy was 92% among 82 subjects who achieved this level. The difference between the 150 and 300 total exam benchmarks was not significantly different (p=0.31). See the table for accuracy by application. Conclusion: Completion of the ACEP minimum US training benchmarks predicts reasonable accuracy among EM residents within the PRN. For gall bladder, renal, and FAST applications, accuracy did not improve beyond the 25 exam benchmark. For the perhaps more challenging aorta and first trimester pelvic applications, accuracy did improve at the 50 exam benchmark. Our results provide the first multicenter, multiple application, longitudinal evidence for the widespread implementation of the ACEP minimum US training benchmarks of 150-250 exams overall and 25-50 exams per US application. Objectives: The objective of this study was to define plateau points for educational learning curves of exam experience versus image quality for individual exam types using an existing database of US reviews. Our hypothesis is that skill in image acquisition increases with experience and then reaches a plateau where further experience has little effect on skill. Methods: US examinations performed across four emergency departments underwent standardized expert review with categorization as either poor quality images that limited exam interpretation or adequate quality for interpretation. Exams were excluded if performed by sonographers whose US training began prior to the implementation of our electronic tracking system. Sonographers included in this study underwent standardized education including didactics and hands-on instruction. Statistical analysis was performed using a generalized estimating equation model to predict performance based on exam experience while controlling for confounders to generate learning curves for nine exam types. A plateau point was calculated as the point at which the growth rate in a curve became sufficiently small (<0.001). A weighted average of plateau points for a given exam type was taken and Bonferroni adjusted confidence intervals calculated. Conclusion: Plateaus in the educational learning curve vary by exam, but on average occur around 100 exams. While learning plateaus do not define proficiency, they should be considered when structuring emergency US training requirements. 
Objectives: To determine the correlation between hydration status and sonographic IVC measurements in Division I intercollegiate football players. Methods: A prospective cohort sample of Division I intercollegiate football players in preseason training camp was recruited while in the training room before practice. All football players on the active roster who were > 18 years were eligible to participate in the study. A short questionnaire was administered to the players regarding current hydration practices and prior dehydration illness. Sonographic IVC measurements were obtained during inspiration and expiration. After an approximately 3-hour practice with moderate-to-high levels of exertion in high ambient temperatures (range 87-100 degrees F), players returned to the training room for a follow-up IVC measurements. Player weights were recorded in the locker room before and after practice. Five players had pre-and post-practice measurements recorded on more than one practice date. One player was lost to follow-up and did not return to the training room after practice for repeat IVC measurements. Results: We enrolled 22 subjects for 27 pre/post-practice measurements. Mean weight loss was 1.5% (SD 0.77) and mean caval index was 30.3 (SD 19.4 Objectives: The objective of this study was to determine if the cardiac PSL ultrasound view is altered by a change in the patient's position. Our hypothesis was that PSL images would be improved in the LLD as compared to the supine position. Methods: This was a prospective observational study performed at a university affiliated emergency medicine residency program. All ultrasound scans were performed on healthy volunteers by one of three members of the study team. PSL ultrasound images were obtained by using a Zonare z.one ultra sp ultrasound machine with a P4-1c phased array transducer, in both the supine and the LLD position in each volunteer. The same probe position, depth, and gain settings were used for both body positions. Images were reviewed at a later date in a blinded manner by three experienced sonographers and scored on a 1-10 scale, with 10 representing the best possible score, in twelve different categories. One category was of overall image quality; the other eleven pertained to how well various cardiac structures could be visualized. Scores from the three reviewers were averaged for each category and compared between supine and the LLD positions. Paired t-tests were used to calculate 95% CIs for mean differences. Results: We collected data from 40 volunteers. Selected image scores are shown in the figure. The mean overall quality score was 0.3 higher for LLD than for supine [95% CI -0.2 to +0.8], which was not statistically significant. Among individual structures, the right ventricle (mean difference 1.1, 95% CI 0.5 to 1.7) and the intraventricular septum (mean difference 1.1, 95%CI 0.7 to 2.0) were seen well in this view (scores > 5) and statistically better in the LLD position, and none were statistically better in the supine position. Conclusion: Left lateral decubitus positioning does not improve overall image quality for the PSL cardiac ultrasound view. It does improve visualization of the right ventricle and intraventricular septum. is a dynamic process characterized by transitions between states of shock severity. The ability to predict these transitions would inform treatment decisions, but models that quantitatively describe them do not exist. 
Objectives: To develop a discrete time Markov model for resuscitation using physiological markers and parameters in a porcine hemorrhagic shock model. Our hypothesis is that this model will predict time-dependent transitions in shock severity during fluid resuscitation. Methods: This is a secondary analysis of data from a 180-minute period of fluid resuscitation in a porcine model with free hemorrhage due to an aortic tear. After being bled to a mean arterial pressure (MAP) of 30 mm Hg, 14 anesthetized swine (20.3 AE 1.7 kg) received two 10 mL/kg IV boluses of Hextend at 0 and 40 minutes. At 30-minute intervals, animals were classified into one of four possible health states according to mixed venous lactate and MAP (table). "Dead" was an absorbing state, but transitions between states were otherwise permitted. Transition probabilities were estimated using Markov chain Monte Carlo simulation with a multinomial likelihood and a noninformative prior. Given the observed states at the onset of resuscitation, we calculated the predictive distributions of animals in each state at all subsequent time points. Predictive proportions with Objectives: To evaluate the association of both ED length of visit for admitted patients and rate of LWBS with the outcome of 30-day mortality rate of AMI while controlling for other differences that may influence patient outcomes. Methods: Using merged 2008 data from Centers for Medicare & Medicaid Services and University HealthSystem Consortium (UHC) we examined Medicare patients ages 65 and older with a principal diagnosis of AMI from 23 hospitals across the US. We limited the cohort to patients admitted to UHC facilities. The facilities were categorized into quartiles for each of the two quality measures. Using a multivariate logistic regression model to account for clustering, we examined the association of the two quality measures as predictor variables for the outcome of 30-day mortality after adjustment for comorbidities, age, sex, and race. Results: 3825 patients with an average age of 77.0 (SD 7.9) and 53.5% male were included in the analysis. The average ED length of visit across the four quartiles was 5.40, 6.95, 8.41, and 12.22 hours, while the proportion of LWBS was 1.44%, 2.94%, 5.07%, and 9.81%. The crude 30-day AMI mortality rate for each quartile for ED length of visit of admitted patients was 10.7% for the first (shortest length of visit) quartile, 10.0% for the second quartile, 9.2% for the third quartile, and 12.3% for the fourth (longest length of visit) quartile. When comparing the third quartile to the fourth (longest length of visit) for admitted patients quartile, the risk-adjusted OR of 30-day mortality was 0.74 (95% CI = 0.57 to 0.95). None of the other quartile comparisons for mean ED length of visit, nor any of the LWBS quartiles were significantly different. Conclusion: There was a 26% lower odds of 30-day AMI death among patients admitted to hospitals in the third quartile of ED length of visit for admitted patients measure compared to hospitals who were in the fourth or longest quartile. Further studies should be done to evaluate why hospitals in the third quartile have a significantly lower AMI mortality than those in the fourth quartile. Objectives: We describe a novel educational protocol to train physicians in rural Haiti in point of care cardiopulmonary US (CPUS) and analyze how focused echocardiography and pulmonary ultrasound affects patient diagnosis and management. 
Methods: Design: This is a prospective observational study evaluating the effect of an US educational course on physician care plans. Setting and Subjects: Seven generalist physicians in a large, rural referral hospital in Haiti enrolled a convenience sample of adult and pediatric patients presenting with dyspnea. Intervention: The study intervention was a 3-week US training program in basic emergency US topics, including focused CPUS to aid in accurate diagnosis of common causes of dyspnea (table) . Training methods included lecture didactics, video review, and proctored patient exams. Measurement: Clinicians reported a preliminary diagnosis and management plan based on history, physical exam, and available ancillary tests before performing a focused CPUS exam. After CPUS, physicians reported their diagnosis and management plan. Pre-US and post-US diagnosis and management plans were compared using descriptive statistics to calculate measures of interest. Results: One hundred seventeen patients (88 adult, 29 pediatric) were enrolled over 6 months. Past medical histories included hypertension in 22 (18.8%), tuberculosis in 5 (4.3%), HIV in 5 (4.3%), tobacco use in 9 (7. 7%) , and postpartum in 8 (6.8%). CPUS narrowed or changed the differential diagnosis in 21 (18%), and broadened the diagnosis in 48 patients (41%) identifying pertinent associated pathology such as underlying mitral valve disease, pericardial or pleural effusions. CPUS resulted in a change in management plan in 23 (19.7%) of cases. (figure). Conclusion: Bedside, clinician-performed CPUS changes diagnosis and management plans in patients with dyspnea, and can be accurately performed by a generalist physician after focused training by an emergency ultrasound specialist. CPUS in the developing world should focus on identification of heart failure, effusions, valvular disease, and alveolar parenchymal disease. Background: Dyspnea is a primary symptom of acute heart failure (AHF) and has been proposed as an outcome measure for AHF clinical trials. Traditionally, dyspnea severity (DS) is measured by asking the subject to rate the symptom on a Likert or visual analog scale (VAS) while sitting upright. This method is insensitive to mild but clinically important DS. To address this, a Provocative Dyspnea Assessment (PDA) scale was developed. For the PDA, DS is measured repeatedly while stepwise increasing respiratory stress. It is unknown how to best analyze the PDA scores or if the PDA is superior to the traditional method. Objectives: The objective was to compare traditional DS (DS-TRAD) with four methods of evaluating PDA DS data (DS-PDA). Methods: This was a planned secondary analysis of a prospective clinical trial. Patients admitted with AHF had their DS measured at three time points. At each time point a VAS score (0 indicating no dyspnea) was obtained in each of three sequential steps: sitting upright on oxygen, sitting upright off oxygen, and lying down off oxygen. DS was calculated using five methods: DS-TRAD: raw score while sitting upright. DS-PDA-1: scaled score at the last step tolerated. DS-PDA-2: scaled score at the step with the greatest raw score. DS-PDA-3: 3 scaled scores summed across steps. DS-PDA-4: 3 individual raw scores. These were then modeled with time as the independent variable controlling for treatment status. Model fit was assessed using signal to noise ratio (SNR; mean divided by model root mean square error) as well as Akaike and Bayesian Information Criterion (AIC and BIC). 
The analysis was repeated on data restricted to subjects adjudicated to having primarily AHF. Results: Of 131 subjects randomized, 118 were evaluable and 89% were adjudicated as having primarily AHF. Between 11% and 29% of subjects unexpectedly improved their VAS scores despite progressively difficult PDA steps. Model fit was best for DS-PDA-3 (SNR) and DS-TRAD (AIC and BIC). Restricting the analysis did not change these findings. Conclusion: Many patients had VAS scores that changed seemingly inconsistent with AHF physiology. The model fit analysis conflicted regarding a single best DS method. To address these findings, a multisymptom assessment will be incorporated into future VAS studies and clinical trial outcome measures will be used in models involving these DS methods. Basics of goal-directed ultrasound, probe choice, machine care, scan modes, artifacts (reverberation, shadow), image optimization, depth and gain variation peptide hormone mainly released by atrial myocytes in response to chamber distension. Previous studies have shown that MR-proANP is as useful as B-type natriuretic peptide (BNP) and closely correlated with NT-proBNP for diagnosis of AHF in dyspneic patients and may provide additional clinical utility when BNP is difficult to interpret. A current American College of Emergency Physicians (ACEP) clinical policy states that the addition of a single BNP or NT-proBNP measurement can improve the diagnostic accuracy compared to standard clinical judgment alone in the diagnosis of acute heart failure syndrome among patients presenting to the ED with acute dyspnea (Level B recommendation). We found a negative moderate correlation between myocardial EF and MR-proANP levels. Values are means or percentages with 95%CI. *p-value is from the group*time interaction term. **Site and diabetes category fell out of the MORT model because of the small number of events. the emergency setting as the first line medication to treat convulsive episodes and status epilepticus. The bioavailability of midazolam is not restricted by mode of administration. There have been no prehospital studies comparing the clinical response time of midazolam by two different routes, including the time necessary to administer the medication, such as IV line placement. Objectives: For seizing patients in the prehospital setting, we sought to compare the time to clinical response using intranasal (IN) versus intravenous (IV) midazolam. Methods: A retrospective review was performed of emergency medical services (EMS) and hospital records, before and after implementation of a protocol for the administration of IN midazolam by the Central California EMS Agency. We included patients with prehospital seizure treated in the prehospital setting over 5 years, between March 2001 and March 2006. Paramedics documented the dose of medication, route of administration, and response times using an electronic record. Clinical response was defined as seizure cessation or increase in Glasgow Coma Scale score of at least three points. Primary outcome variables were time from medication administration to clinical response, and time from patient contact to clinical response. Secondary variables included number of doses administered, and rescue doses given by an alternate route. Between-group comparisons were accomplished using t-tests and chi-square tests as appropriate. Results: 416 patients met inclusion criteria, including 276 treated with IV and 140 treated with IN midazolam. 
There were no differences in the rate of clinical response for IV vs IN administration (55% vs 57.8%, p = 0.3), or in the time from patient contact to clinical response (3 vs 4 min, p = 0.9). However, there was a trend toward more rescue doses required in the IN group (16% vs 24%, p = 0.06), and more patients in the IN group required a rescue dose by another route (17% vs 0.3%, p=0.0001). Conclusion: Given the difficulty and potential hazards of obtaining IV access in many patients with active seizure activity, IN midazolam appears to be a useful alternative in the prehospital setting. Background: Radiation exposure from CT scans ordered in the ED is controversial from medico-legal and basic science perspectives. It is unknown how many emergency providers (EPs) discuss CT scan risks with patients, if EPs are aware of such risks, and whether this is an appropriate discussion to have with patients. Objectives: To identify barriers to informed discussions between patients and physicians about radiation risk from CT scans, and to use empirical data and normative analysis to inform patient communication practices in the ED. Methods: A mixed-methods approach collected information from EPs and patients. Directed content analysis identified key issues from focus groups of a national sample of emergency physicians and local hospital patients. A separate written survey assessed provider knowledge and practice. Results: Focus groups (three each: 19 EPs, 27 patients) identified concepts consistent with core medical ethics principles: patients emphasized autonomy and non-maleficence more than physicians, and physicians emphasized beneficence. Patients spontaneously identified a need for a handout to read when waiting for tests. The written survey recruited 421 EPs at 31 of 32 EDs in Connecticut (response rate 81%); 98% report discussing radiation and 79% say that it should be discussed more than half of the time; 7% of respondents knew the doses of three common CT scans and 12% named the lifetime risk of malignancy from one CT scan. Female, pediatric, and recently graduated EPs answered correctly more often. 86% identified the patient's emergent condition as a reason not to discuss risk, 55% do not discuss risk with older patients, and 24% are too busy to discuss the risk. Only 5% of EPs called into question the published estimates of radiation risk. When presented with options for how to discuss risk with patients, physicians preferred discussion over other means, but were generally supportive of distributing patient handouts. Conclusion: A large majority of EPs report discussing radiation risk with their patients and that it should be discussed, but few actually know the doses or estimated risk published in the current literature. The normative view that radiation should be discussed is shared by patients and physicians, but is challenged by the constraints of emergency practice and lack of physician knowledge. An effective informational tool is needed to overcome this barrier. Methods: This was a retrospective cohort study using the National Hospital Ambulatory Medical Care Survey from [2005] [2006] [2007] [2008] [2009] . We first analyzed all patients younger than 18 years of age with a primary discharge diagnosis of headache. Data collected included treatment, diagnostic testing, and disposition. 
Second, we analyzed patients with a discharge diagnosis of migraine to assess the use of evidence-based treatment (EBT) defined as use of non-steroidal anti-inflammatory (NSAIDs), dopamine antagonists, or triptan medications. Results: Our first group included 448 ED visits from 2005-2009 and represented a national estimate of 1.7 million visits with a discharge diagnosis of headache. Our second group included a total of 95 visits and represented a national estimate of 340,000 visits with a discharge diagnosis of migraine. Median age for all patients was 13.1 years and 60% were female. All Headaches: Neuroimaging was performed in 37% of patients, and 39% underwent blood tests. NSAIDs and opioids were most commonly used for treatment. Migraine Headache: Dopamine antagonists and NSAIDs were most often used for treatment; however, approximately 40% of patients received non-EBT, most commonly with opioid medications and over 20% of patients underwent CT imaging. Conclusion: Headache in children remains a disorder with variability in ED evaluation and treatment. Despite evidence-based clinical guidelines for migraine headache, a large number of children continue to receive opioids and ionizing radiation in the ED in our nationally representative sample. Methods: This prospective cohort study enrolled a convenience sample of adult trauma patients at a Level I trauma center with MMTBI as defined by blunt head trauma followed by loss of consciousness, amnesia, or disorientation and a GCS 9-15. Patients were then assessed in person at one month post-injury. Serum samples were obtained from each patient within 4 hours of injury and measured by ELISA for SBDP150 (ng/ml). The primary outcome was global outcome as measured by the Glasgow Outcome Scale (GOS) score at one month post-injury. GOS was dichotomized into good and poor outcome: poor outcome was defined as death, vegetative state (VS), or severe disability (SD) and good outcome was moderate disability (MD) or good recovery (GR). SBDP150 levels are described with medians and interquartile range (IQR). Biomarker performance was assessed using area under the ROC Curve (AUC, 95%CI). Results: There were 98 MMTBI patients enrolled and 78 (80%) patients completed follow-up and were included in the analysis: 75 with a GCS 13-15, and 3 with a GCS 9-12. The median age was 40 years (range 18-83) with 45 (58%) males. At one month post-injury, there were 47 (60%) patients with GR, 17 (22%) with MD, 14 (18%) with SD, and no deaths or VS. Median serum SBDP150 levels within 4 hours of injury in those with GR, MD and SD were 1.52 (IQR 0.31-2.81), 2.55 (IQR 1.06-3.28), and 5.28 (IQR 3.12-8.18) respectively. When the groups were dichotomized into good and poor outcome, median SBDP150 levels were 1.88 (IQR 0.37-3.14) and 5.28 (IQR 3.12-8.18) respectively. The AUC for predicting poor outcome in the 14 (18%) patients was 0.86 (95%CI 0.77-0.95). Conclusion: An elevated level of SBDP150 measured within 4 hours of injury in the ED was associated with having a poor outcome at one month post-injury in MMTBI patients. An ongoing study is evaluating this in a larger cohort. Anti-platelet and Anti-coagulants Do Not Increase Traumatic Intracranial Bleeds in Elderly Fall Victims Darin Agresti, Donald Jeanmonod, Khalief Hamden, and Rebecca Jeanmonod St. Luke's University Hospital, Bethlehem, PA Background: Falls in elder patients are common presentations in emergency departments. 
Previous research examining the incidence of intracranial injury and the effects of anti-platelet and anti-coagulant medications have largely been derived from retrospective analyses of trauma databases, which may overestimate the incidence of injury in elders. Objectives: The purpose of this study was to obtain prospective data on rates of injury of all elders presenting to an emergency department with a fall mechanism and to determine if taking antiplatelets or anti-coagulants affects the rate of injury. included all ED patients, excluding physician-confirmed STEMIs, who had at least one cTnT or hsTnT drawn as a marker for symptoms suspicious of acute coronary syndrome (ACS). The primary outcomes included 7-and 30-day ED re-visits and 7-and 30-day revisits resulting in hospital admission. The secondary outcome was the proportion of suspected ACS patients discharged from the ED following their initial workups. Proportions were compared using Pearson's chi-square test. Results: Troponin assays were used to evaluate 6742 Ctrl, 6952 Pre, and 5822 Post patients; demographic characteristics between the groups were similar. There was no statistically significant difference in 7-or 30-day return ED visit rates for these patients. However, a statistically significant reduction in 30-day ED revisits requiring admission was observed following hsTnT implementation (ARR 1.3%, RRR 24%, p=0.011). Overall 3494 (51.8%) Ctrl, 33551 (51.0%) Pre, and 3172 (54.5%) Post patients were discharged from the ED following their initial workups (p<0.001). Conclusion: Replacing conventional cTnT with hsTnT is associated with a modest increase in the proportion of suspected ACS patients being discharged from the ED as well as improved decision-making reflected by a reduced rate of revisits resulting in hospitalization. The Effect of Implementing Objectives: Our objective was to evaluate hsTnT implementation on Methods: This time-series analysis involved a cohort of patients presenting to three adult tertiary care EDs undergoing hsTnT testing or conventional TnT (cTnT). A common ED information system database was used to collect outcome data. The hsTnT assay, along with an educational program, was implemented on January 31, 2012. Three ten- week time periods were compared: February 12, 2011-April 22, 2011: control period one year prior to hsTnT implementation (Ctrl); November 20, 2011-January 28, 2012: immediately pre-hsTnT implementation (Pre); February 12, 2012-April 21, 2012: immediately post-hsTnT implementation (Post). Subjects included all ED patients who had at least one cTnT or hsTnT drawn as a marker for suspected ACS with STEMIs excluded. Primary outcomes were ED length of stay (LOS), consultations, and admission rates. Categorical variables were analyzed using Pearson's chi-square test. The Mann-Whitney U test was used to compare median values between the three study periods. Results: We analyzed data from 6742 Ctrl, 6952 Pre, and 5822 Post patients; demographics between the groups were similar. A smaller proportion of patients had troponin assays performed in the Post period, compared to the Ctrl and Pre periods (12.5% Post vs. 15.8% Ctrl and 15.4% Pre). A significant reduction in median ED LOS was observed following hsTnT implementation: 6.53 h in Ctrl, 6.52 h in Pre, and 6.04 h in Post (p<0.001). The proportion of patients with third troponins ordered decreased from 7.9% in Ctrl and 7.2% in Pre to 5.9% in Post (p<0.001). 
There was no statistically significant change in the number of cardiology consultations or admissions following hsTnT implementation. Conclusion: Implementing hsTnT testing at three centers was associated with decreased testing and reduced ED LOS for patients with suspected ACS and no increase in cardiology resource utilization. This finding may be attributable to the educational program that accompanied the implementation of hsTnT testing. Background: Chest pain is a common presenting complaint to the emergency department (ED) with high rates of hospital admission. The majority of patients presenting with chest pain have benign causes and admitting these patients represents a significant cost and resource burden. Objectives: In June 2011, we instituted a low-risk chest pain protocol aimed at reducing the rate of hospital admission of patients with chest pain of unclear etiology. The protocol included exercise stress testing the afternoon or morning after discharge from the ED with immediate interpretation by a cardiologist and appropriate follow- Methods: To evaluate the effectiveness of the protocol, the research team reviewed patient charts from all visits with chief complaint of chest pain from March 2011 to June 2011 (pre-implementation) and from March 2012 to June 2012 (post-implementation). Patients were included in analysis if they were evaluated by a physician who was working both before and after institution of the protocol and had seen at least 10 patients with chest pain. Results: There were 1,139 patient visits in the pre-intervention group and 1,113 in the post-intervention group. Thirty physicians were included during this period. The mean admission rate was significantly lower after the intervention, decreasing from 54% to 44% (odds ratio=1.50; 95% CI 1.26 -1.77; p<0.001). During the same period, the incidence of acute coronary syndrome increased from 12% to 19%. In the patient group after the intervention, 118 patients were enrolled in the protocol. Twenty patients had TIMI scores greater than 1. Two stress tests showed evidence of ischemia leading to catheterization. One patient underwent PCI. No patient returned to the emergency department with acute coronary syndrome. Conclusion: After institution of a low-risk chest pain protocol, the admission rate decreased substantially despite an increase in the apparent acuity of patients. The protocol was utilized for a number of higher risk patients as well with no adverse events. Cost-effectiveness of a Multi-disciplinary Observation Protocol for Low-risk Acetaminophen Overdose in the Emergency Department Gillian Beauchamp, Kimberly Hart, Christopher Lindsell, Michael Lyons, Edward Otten, Stewart Wright, and Michael Ward University of Cincinnati, Cincinnati, OH Background: Acetaminophen is the most commonly ingested pharmaceutical taken in overdose. Given the increased use of emergency department (ED) observation units, development of a standardized 20hour intravenous N-acetylcysteine infusion, and methods to identify those at low risk for hepatoxicity, our center developed a multi-disciplinary observation unit protocol as an alternative to hospitalization for low-risk acetaminophen overdose. Objectives: To analyze the cost-effectiveness of treating low-risk acetaminophen overdose in an observation unit when compared with inpatient admission. 
Methods: We developed a standard decision analytic model (TreeAge 2011, Williamstown MA) using the societal perspective and a willingnessto-pay threshold of $50,000 per quality adjusted life year to evaluate the treatment of low-risk acetaminophen overdose in three settings: ED observation unit, hospital admission to a floor bed, and to the intensive care unit. Facility and professional costs were estimated from Medicare 2012 data and from the 2012 National Physician Fee Schedule. Data from published studies were used to estimate risk of clinical outcomes. Results: In the base-case scenario, use of an ED observation unit is the dominant strategy with a cost of $11,600 and 25 quality adjusted life years. Admission to floor and ICU settings resulted in identical quality adjusted life years and costs of $15,224 and $34,244, respectively. Sensitivity analyses found that given similar outcomes, the key drivers of the decision were dependent upon cost, eligibility for the observation unit protocol, and the probability of admission to the hospital. If less than 51% of observation unit-eligible patients require inpatient care after completing observation unit therapy, admission to the observation unit would be the most cost-effective choice. Conclusion: A multi-disciplinary ED observation unit protocol for acetaminophen overdose offers a cost-effective alternative to inpatient admission, provided that the majority of patients are not subsequently admitted to the hospital. Objectives: To quantify the potentially avoidable CPOU admission rate, examine provider variability, and determine patient and provider characteristics associated with potentially avoidable CPOU utilization. Methods: We examined a consecutive cohort of chest pain patients evaluated in an ED-based CPOU using prospective and retrospective CPOU registry data. Patients were risk-stratified based on the ACC/AHA framework, age, and ECG findings. Very-low-risk was defined as age < 35, provider global assessment of low-risk, and normal or non-diagnostic ECG. Patients identified as very-low-risk were considered potentially avoidable CPOU admissions. Each encounter was associated with a board certified emergency physician allowing calculation of individual physicians' potentially avoidable CPOU utilization rates. Patients were followed for 30 day major adverse cardiac events (MACE), defined as the composite of death, acute myocardial infarction, and coronary revascularization. Results: Over 33 months, the registry included 1731 chest pain patients. The study definition of potentially avoidable CPOU admissions was met by 10.1% (95%CI 8.7-11.6%). The median rate of provider's potentially avoidable CPOU utilization was 10% [interquartile range 5. [9] [10] [11] [12] [13] .6%] and varied from 1.9% to 18.4%. No patient with a potentially avoidable CPOU admission had a MACE within 30 days. Patient-level predictors of potentially avoidable CPOU admission included male sex, chest pressure, sharp chest pain, vomiting, lightheadedness, and absence of hypertension or hyperlipidemia. Provider-level predictors included recent residency graduation (<5 years),part-time status, and moderate or high CPOU utilization. Methods: A prospective observational study funded by NHLBI that enrolled 2990 MI patients, aged 18-55 years, from 104 US hospitals from 8/21/08-1/5/12 using a 2:1 female/male enrollment design. Data were collected by patient interviews during hospitalization and review of medical records. 
CMS goal for D2B was defined as 90 minutes for non-transfer and 120 minutes for transfer; D2N 30 minutes. Clinically significant variables chosen a priori (age, sex, race, cardiac risk factors (CRF), presentation time, hemodynamic instability, and prior MI), were entered in a multivariate logistic model. In VIRGO, 46% of women vs 58% of men had a STEMI (OR 1.61; 1.38, 1.88) . The median age was 48 (IQR 44, 52) , 19% were nonwhite, and 97% had > 1 CRF. Women were more likely than men to smoke (68% vs 61%, p=0.01), be obese (53% vs 43%, p=0.02), and have diabetes (DM) (38% vs 23%, p<0.01). In adjusted analysis, factors associated with delays were younger age, presenting off-hours, with significant interactions for sex*race and sex*diabetes (Table 2 CRFs, prior MI, instability, other non-significant interactions not shown). Conclusion: Young women with STEMI are less likely to meet CMS guidelines for reperfusion. Mode of Hospital Arrival in ST-Elevation Myocardial Infarction: Ethnic and Language Differences in an Urban STEMI Receiving Center Tasneem Bholat 1 , Stephanie Y. Donald 2 , Robert S. Lee 1 , Rishi Kaushal 1 , Katrine Zhiroff 1 , Quang George Washington University, Washington, DC Background: Regional systems of care improve reperfusion times and survival for patients with ST-segment elevation myocardial infarction (STEMI) because patients are rapidly transported to specialized centers for primary percutaneous coronary intervention (PCI). The characteristics and outcomes of patients who do not take advantage of these systems of care have not been well-studied. Objectives: The objective was to determine the clinical and demographic differences between patients with STEMI who selfpresented to the emergency department compared to those who were transported by emergency medical services (EMS). The clinical and demographic data for 232 consecutive patients undergoing primary PCI at an urban STEMI receiving center (SRC) were stratified by treatment within the SRC system versus after self-presentation (salk-in) with STEMI. Comparisons between groups were performed using the t-test for continuous data and chi-square or Fisher exact test for categorical data. Results: 125 patients presented in the SRC system, and 107 patients walked in. Hispanic patients presented more often as a walk-in than within the SRC system (51% versus 30%, p<0.001) and patients in the walk-in group were twice as likely to be non-English speaking (41% versus 20%, p<0.001). Black patients were more often in the SRC group (28.8% versus 11.2%, p <0.01). Walk-in patients had longer symptom duration (290 minutes versus 90 minutes in the SRC group, p=0.002), and longer door to balloon times (86 minutes versus 66 minutes, p<0.001). Conclusion: At an urban STEMI receiving center, Hispanic and non-English speaking patients are more likely to self-present, and they may not benefit from regionalized care. Patients who self-presented had longer total ischemic and reperfusion times, measures which contribute to worse outcomes after acute MI. These differences have complex roots and should be addressed in future public health initiatives. Background: Acute kidney injury (AKI) has been shown to increase the risk of immediate and delayed requirement for renal replacement therapy and reduce both short-and long-term survival in patients with critical illness. IV contrast is routinely administered in the course of acute STEMI therapy. The incidence of AKI in the setting of contrast administration is not well defined. 
Objectives: The authors have investigated the incidence of AKI and short-term mortality following an activated STEMI alert at a tertiary referral and academic center. Methods: This retrospective chart review encompasses 27 months, from January 2010 to March 2012. Inclusion criteria were STEMI patients taken for cardiac catheterization, excluding patients receiving hemodialysis due to end-stage renal disease (ESRD). Data collected included demographics, the dose of contrast administration, serum creatinine in the ED and during hospitalization, and an assessment of AKI using the RIFLE (Risk, Injury, Failure, Loss, ESRD) criteria based on the patient's baseline and peak serum creatinine. Results: 257 patients were analyzed over the study period. Three patients with current ESRD undergoing hemodialysis were excluded. The median contrast volume was 200 ml, IQR 145-275 ml. 8.6% suffered AKI after cardiac catheterization with a 1.5 fold or greater increase in serum (95% CI 5.2-12.0%). Eight of 257 (3.1%, 95% CI 1.0-5%) met AKI criteria for RIFLE stage R, (risk), 2% met criteria for RIFLE stage F (failure) (95% CI 0.0-1.8%), and two patients required hemodialysis. On multiple regression analysis, contrast volume was significantly associated with increased risk of developing AKI (P=0.0170), after adjusting for age and sex. Of the 22 patients with AKI, 8 died (36.4%, 95% CI 16.3-56.5%), and of the 235 patients without AKI, 2 died (0.9%, 95%CI -0.3 to -2.0%) OR: 66.6 (95%CI 12. 9-343.4 ). In this single-center study of STEMI patients nearly 10% developed AKI following IV contrast administration and AKI was associated with an OR for mortality of 66. These findings follow similar trends published among critically ill patients with AKI. Recognizing risk factors for acute kidney injury and mitigating exposure to nephrotoxic agents in the emergency department warrants further investigation. Background: ECGs are widely available technology and often one of the first tests ordered in patients presenting to the emergency department with acute chest pain. aVL is the only ECG lead facing the superior part of the left ventricle, making it the only lead that is truly opponent to the inferior wall. As such, it is the optimal lead for early detection of inferior myocardial infarction (MI). However, the prognostic significance of subtle "flattened" ST/T segment changes in the aVL lead remains unclear. Objectives: We sought to determine the prognostic significance of subtle flattening and negative deflections of ST/T segment changes in the aVL lead. The clinical outcome of interest was MI, occurring within 24 hours of the ECG. Prognostic significance was quantified with sensitivity, specificity, positive and negative predictive values, and relative risk. Methods: Our inquiry was a retrospective cohort study. We performed a query of the past 955 admission ECGs for patients who presented to UNMC ED with MI/angina equivalent symptoms, gathering aVL ST-Elevation (STE) metrics from MUSEâ Cardiology Information System. We excluded all pediatric patients and any patient encounters without available ECG STE metrics. All encounters were analyzed for the development of subsequent MI within 24 hours on ECG, diagnostic catheterization, echocardiography, or cardiac enzymes. 
Results: 2.62% of all patients (with no ECG risk stratification) developed MI within 24 hours, but patients who had measurable negative deviation in aVL ST/T segment appear to be at progressively higher risk.Relative risk (RR) of MI was 1.40 for aVL STE values between 0.000 and -0.025 mV (negative deviation from the baseline of less than 25% of one box on standard ECG paper), and RR of MI was Objectives: To evaluate the association between LOS and patient disposition in the pediatric ED. We hypothesize that admitted patients will have a longer total LOS and longer time from door to provider (physician or mid-level provider). Methods: A prospective, observational, multisite cohort study of a 24-hour consecutive sample of pediatric ED patients was conducted on 11/14/11 at six U.S. EDs: three children's hospitals, and three general EDs of which two had separate pediatric areas, and one integrated adult and pediatric area. Demographic information and time intervals were collected. Descriptive statistics (median, first quartile (Q1) (25th percentile), and third quartile (Q3) (75th percentile)) were used for time intervals overall and by disposition. The Mann-Whitney (MW) test was used to compare intervals by disposition. Results: 641 pediatric patients were screened, with a final sample size of 625 eligible patients (10 with unknown admission/discharge status; 6 transferred); 67 admitted and 558 discharged. Ages ranged from 0-21 years, with a mean age of 7.54 (AE 6.12). Subjects were Caucasian (36.81%), African-American (25.28%), Hispanic (22.75%), Asian (3.48%) , and other/unknown (11.58%); and 52.35% male. Most subjects arrived by private vehicle (91.63%), basic life support (5.53%) and advanced life support (2.21%) ambulances. Overall, the median total LOS was 163 minutes (min) (Q1 = 117, Q3 = 226). Admitted patients had a longer total LOS (median = 239 min, Q1 = 181, Q3 = 341) than discharged patients (median = 157 min, Q1 = 113, Q3 = 217) (MW, P < 0.0001). Overall, the median time from door to provider was 51 minutes (Q1 = 21, Q3 = 88). Discharged patients had alonger wait times from door to provider (median = 56 min, Q1 = 22, Q3 = 90) than admitted patients (median = 29 min, Q1 = 13, Q3 = 54.5) (MW, P < 0.0001). Among admitted patients (information was only available for 36 of the 67), the median time from admission decision to departure was 76.5 min (Q1 = 53, Q3 = 112.5). Conclusion: Admitted patients had longer total LOS than discharged patients. In our study the discharged patients had longer wait times from door to provider as compared to admitted patients. In the ED, focusing on ways to improve the time from door to provider for lower acuity patients may improve overall ED LOS and patient satisfaction. Results: The four separated years (2000, 2003, 2006, and 2009 ) of national discharge data included 128,000 to 142,000 weighted annual discharges with bronchiolitis. Between 2000 and 2009, the incidence of bronchiolitis hospitalization decreased from 17.7 (95% CI, 16. [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] .7) to 15.1 (95% CI, 14.4-15.9) per 1000 person-years (P for trend <0.0001). Pains, the Institute of Medicine outlined major systemic obstacles to emergency care including pediatric readiness, fragmentation of care, limited access to specialists, and overcrowding. 
Currently there are insufficient data detailing the specific needs and challenges in the management of acutely ill and injured children who require transfer to a higher level of care, especially in underserved rural communities. Prior to implementing a formal system of regionalized care, it is important to understand existing patterns of inter-hospital transfers related to ED visits. Objectives: We sought to determine the epidemiology of pediatric transfers from urban and rural emergency departments to a higher level of care, including rate of transfer, patient characteristics, and reason for transfer. Methods: We conducted a retrospective study of the National Hospital Ambulatory Medical Care Survey from 1995 to 2010. Eligible children were < 18 years of age treated in a U.S. emergency department (ED) and transferred to another hospital after initial evaluation. Results: Of all 283,232,058 pediatric ED visits, 0.32% resulted in a transfer, yielding a population-based estimate of 900,100 transfers nationally during this period. There was no statistically significant difference in the rate of pediatric transfer from urban versus rural EDs (P = 0.14). Children transferred from rural EDs were older and more likely to arrive by emergency medical services than children transferred from urban EDs (12.1 versus 8.2 years of age, P < 0.01). Children from rural EDs were more than twice as likely to be transferred for a psychiatric indication (43.5% versus 19.5%, P < 0.01). ED length of stay for transferred patients was higher in the urban compared to the rural setting (median time 238 minutes versus 150 minutes, P < 0.01). Conclusion: Pediatric ED transfers to a higher level of care are uncommon in the U.S. Despite known differences in pediatric capability between urban and rural hospitals, transfer rates are similar. Rural children have additional obstacles to care, especially in access to emergency mental health services. Programs to study and implement regionalization of care should consider a broad range of hospital types and consider diverse patient populations, targeting improvement in coordination of care, transfer times, and outcomes. High Emergency Department and Urgent Care Use When Sick Children Cannot Attend Child Care Andrew N. Hashikawa University of Michigan, Dexter, MI Background: Previous studies have shown that children in child care are sick more often with colds and diarrheal illness than children who are not in child care. Children with mild illness are often unnecessarily excluded from child care, necessitating a health care visit. Little is known about the medical care-seeking behavior of parents with sick children excluded from child care. Objectives: To examine the prevalence of parents with children in child care who seek urgent medical evaluation of a sick child and to better understand where parents seek medical care for sick children who cannot attend child care. Methods: In May 2012, we conducted a cross-sectional, internetbased survey of a nationally representative sample of the U.S. population (n=2,144) . This survey was conducted as part of the C.S. Mott Children's Hospital National Poll on Children's Health, a recurring online survey of parents and non-parents. Parents of children age 0-5 in child care (n=310) responded to questions regarding illness that caused absence from child care, parents' opinions regarding work absenteeism, and parents' medical care-seeking behavior when their children could not attend child care. 
Results: The parent survey participation rate was 57%. 62% of parents reported their sick children could not attend child care at least once in the past year and 88% of parents sought acute medical care. Parents reported taking their sick children for medical care at: primary care physician (81%); urgent care (UC) (26%); and emergency department (ED) (25%) (parents could endorse more than one option). Bivariate analysis (unadjusted) indicated UC/ED use was higher among black parents (vs. white; OR=3.5, 95% CI 1. 2-10.8) , single/divorced (vs. married; OR=4.2, 1.4-12.7), or with income <$60,000 (vs. ! $60,000; OR=2.4, 1. 1-5.8) . 30% of parents said a doctor's note is required in order for a sick child to return to child care and 21% of parents' employers required a doctor's note for time off work to care for a sick child. Conclusion: While many parents take their sick children to primary care physicians, a substantial proportion of parents seek care in either an UC or ED. Many parents also reported that child care required a medical note allowing the child to return. The adoption of new AAP *Variance-weighted logistic regression adjusting for age, sex, race/ethnicity, primary payer, admission day (weekend vs. weekday), comorbidities, US region, hospital location, teaching status, and children's hospital designation. †High-risk medical condition was defined as history of prematurity or at least 1 complex medical condition, previously defined using ICD-9-CM codes in 9 categories of illness (i.e., neuromuscular, cardiovascular, respiratory, renal, gastrointestinal, hematology or immunologic, metabolic, malignancy, and other congenital or genetic defect disorders). Objectives: To determine the effect of implementing an evidencebased clinical pathway (based on NIH guidelines) on timely and appropriate administration of steroids for children with moderatesevere asthma exacerbations presenting to the ED. Methods: Prospective, before-after study of pediatric ( 21 yrs) patients with a primary diagnosis of asthma (ICD-9 code 493.xx) and moderate-severe exacerbation treated in a general academic ED. Moderate-severe exacerbation was defined as requiring ! 2 (or continuous) bronchodilators. Retrospective data on steroid use were collected for eligible visits between 2006 and 2011. A care pathway was implemented September 2011. Clinicians (nurses, respiratory therapists, physicians) attended an educational session to review the evidencebased guidelines and clarify areas of uncertainty. From Sept 2011-Feb 2012, research assistants identified pediatric asthma patients in the ED and gave copies of the pathway to treating physicians and nurses. Email reminders were sent to treating clinicians when the pathway was not followed. Charts were abstracted to compare the proportion of visits receiving steroids 1 hour of ED arrival, timing of bronchodilators, and use of chest radiographs pre-and post-pathway implementation. Results: 1025 pediatric ED visits with moderate-severe asthma exacerbations occurred 2006-2012. Baseline demographics and presenting characteristics in the pre-(n=822) and post-implementation (n=203) groups were comparable. After pathway implementation, steroid administration increased (79% vs 97% p<0.001), and steroid administration 1 hr of ED arrival increased (22% vs 46% p<0.001). Mean time from ED arrival to steroid administration was lower in the post-intervention group (125 vs 85 min p<0.001). 
The number of bronchodilators administered 1 hr of ED arrival increased (0.98 vs 1.22 p=0.001); the proportion of visits with three (or continuous) bronchodilators received 1 hr also increased (5% vs 9% p=0.02). The use of chest radiographs was reduced (43% vs 33% p=0.008). Conclusion: Implementing an evidence-based clinical pathway can be associated with improved adherence to NIH guidelines for pediatric ED visits with moderate-severe asthma exacerbations. Broselow Tape Background: In a critically ill or injured child, actual measurement of weight is not always possible. The Broselow tape (BT) is an important tool to predict child weight based on the height. Although BT has previously been validated, given the increasing prevalence of obesity it behooves clinicians relying upon this resuscitation aid to revisit the issue. Objectives: To evaluate the accuracy of color-coded BT in weight estimation, and the influence of obesity on its accuracy. Methods: Design: Observational and retrospective. Setting: Urban hospital. Participants: Children up to 96 months of age presenting during 2008-2010. We recorded each child's age (months), actual weight (kg) and height (cm) . Based on the height, weight estimation was performed using the color-coded BT. Actual weight was compared with the predicted weight based on the BT. The presence of any medical condition that would substantially affect weight and/or height was an exclusion criterion. Separate logistic regression models were performed to evaluate the association of age or BMI percentile with weight underestimation, while adjusting for sex. Results: 547 medical records were reviewed. There was a discrepancy in 235 (43%) children. BT underestimated weight (actual weight higher than predicted) in 167 (71%) children and over-estimated weight (actual weight lower than predicted) in 68 (29%) children. Out of 167 under-estimated children, 139 were by one color zone, 22 by two color zones and six by >2 color zones. When stratified for age, 51 were between 24-48 months, 42 between 48 and 72 months, and 36 were over 72 months. In 68 over-estimated children, 66 were by one color zone and two by two color zones. Children with BMI percentile 85-95% were more likely to have weight underestimated compared to < 85% (OR 5.53, p<0.001 and OR 26.34, p<0 .001). Children >72 months showed higher odds, but this did not approach statistical significance. Males had higher odds of weight underestimation in all models. Conclusion: In our population, BT was inaccurate in predicting weight in 42% of children. Greater deviation was noted in children older than 72 months. Higher BMI percentile, age > 72 months, and male sex had higher odds for underestimation. Methods: Pediatric emergency nurses and EMS providers received a very brief explanation of each of the seven weight estimation methods. Each rater then estimated the weights of five child volunteers of various age and weight in a non-clinical setting using each of the seven methods. Actual weight was determined with a calibrated scale. The speed of each method was recorded using an electronic software program. A 10-point Likert scale was used to evaluate the ease of use for each method where 1 represented "easy" and 10 "difficult." Results: A total of 80 raters (44 nurses, 36 EMS providers) and 80 children ( 8, 80.2] . 
When overweight and obese children (30% of study participants) were considered separately, weight was predicted within 20% of actual for: PE (28%), APLS (14%), Broselow (46%), DWEM (59%), LO (50%), 2DT (83%), and 3DT (74%) of children. The median time (sec) to complete each method was: PE (22.8), APLS (13.5), Broselow (37.8), DWEM (46.2), LO (13.7), 2DT (65.6), and 3DT (60.2). Repeated use significantly increased speed for all methods. Each method was rated on the 1-10 Likert scale, with all methods averaging between 2 and 4. Conclusion: Despite slower performance times in untrained raters, the 2D- and 3D-TAPEs more accurately estimate weight, particularly in overweight/obese children. Background: Ketamine is routinely used for deep sedation in children for brief painful procedures. The standard method of ketamine administration (1.5-2 mg/kg infused intravenously over 30-60 seconds) results in prolonged recovery (60-120 min). We hypothesized that a rapid bolus (over <5 seconds) of a small dose of ketamine would achieve a brief period of deep sedation with rapid recovery and without increased adverse effects. Objectives: The purpose of this study was to find the minimum dose and recovery time of rapidly infused ketamine that achieves 3-5 minutes of effective sedation in 95% of children (ED95) undergoing abscess incision and drainage (I&D) or fracture reduction in the emergency department. Methods: Twenty healthy children (ASA 1-2) in each age group (2-5, 6-11, or 12-17 yrs), for each procedure, receive bolus doses of ketamine determined by the Up-Down method prior to the procedure. Additional ketamine is given if needed; only the first dose is analyzed for effectiveness. The next patient in the group receives a bolus dose 0.1 mg/kg smaller if the prior patient's sedation was effective, or 0.1 mg/kg larger if it was ineffective. Sedation efficacy is determined by three faculty reviewers, blinded to the dose, independently grading the patient's response during the first 5 minutes of a video of the procedure. Aldrete recovery score and response to a standard verbal command (every 5 min) determine recovery. Using the Up-Down method, we determined the median effective dose, or ED50 (dose effective in 50% of participants). The ED95 is calculated from the ED50. Results: We have enrolled 20 children 2-5 yrs old undergoing abscess I&D and, so far, 10 children 6-11 yrs old undergoing fracture reduction. ED50 is 0.9 mg/kg (0.6-1.2) and ED95 is 1.1 mg/kg (1.0-6.1) for 2-5 yr old children for abscess I&D. For 6-11 yr old children for fracture reduction, ED50 is 0.5 mg/kg and ED95 is 0.6 mg/kg. Mean recovery time to an Aldrete score of 10 (full recovery) was 24.5 minutes for the abscess group and 20.5 minutes for the fracture group. No participant experienced a serious adverse event. Parent satisfaction with the procedural sedation was high (24 out of 24). Conclusion: For rapidly infused ketamine, we determined the ED95 for abscess I&D in 2-5 yr old children and a preliminary ED95 for fracture reduction in 6-11 yr old children. This technique results in rapid recovery with no apparent increase in adverse events. Background: Pain and anxiety make IV placement more difficult, resulting in multiple attempts before successful IV placement. The Needle-Free Jet-Injection system with buffered lidocaine (J-tip) has been shown to reduce pain for IV insertion, but there is no literature evaluating the relationship between J-tip use and successful IV placement. Objectives: We hypothesized that J-tip use would improve IV placement success in children.
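The Up-Down dose-finding rule described in the ketamine sedation abstract above can be simulated in a few lines; the logistic dose-response curve and the starting dose below are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
true_ed50, step, dose = 0.9, 0.1, 1.5  # mg/kg; all assumed for illustration

doses = []
for _ in range(20):  # 20 children per group, as in the study design
    doses.append(dose)
    # Assumed logistic dose-response; in the study the outcome is the
    # blinded three-reviewer grading of sedation efficacy.
    p_effective = 1 / (1 + np.exp(-(dose - true_ed50) / 0.1))
    effective = rng.random() < p_effective
    dose += -step if effective else step  # step down if effective, up if not

# A simple ED50 estimate: average the doses after an initial run-in.
print(f"estimated ED50 ~ {np.mean(doses[5:]):.2f} mg/kg")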
Methods: This is a retrospective cohort study of children ages 1 to 18 years with emergent IV placement. A random sample of children was selected for each of three age groups: 1) 1 to 2 years, 2) 3 to 6 years, and 3) 7 to 18 years. The standard treatment group (PRE) included children with IV insertions from January 2009 through January 2010, when J-tips were not available. The J-tip treatment group (POST) included children with IV insertions from December 2010 through December 2011 who received a J-tip once its use was established in the ED. The chi-square test was used to compare the proportion of first-attempt success as well as the effect of diagnosis, sex, race/ethnicity, and history of prematurity on first-attempt success. Results: Three hundred children (150 PRE and 150 POST) were enrolled in each of the three age groups, totaling 900 children. The most common diagnoses were vomiting/dehydration (31.0%), trauma (19.8%), and infection (15.8%). No differences in sex, race/ethnicity, history of prematurity, or diagnoses were found between the PRE and POST groups in any of the age groups. Overall, first-attempt success was similar between the PRE and POST groups: PRE 67.6% vs. POST 70.0% (mean difference +2.4%, CI -3.6% to +8.4%). No difference was found in any of the age groups. Background: Gastroenteritis is among the most common complaints seen in a PED. Being able to quickly and accurately determine a patient's level of dehydration allows for the early initiation of appropriate management. However, at present, dehydration can only be determined by a thorough physical examination and/or laboratory results. The former cannot determine acidosis; the latter is time consuming and invasive. Objectives: This study evaluated the utility of end-tidal CO2 (ETCO2) in predicting the level of dehydration and acidosis in children with suspected gastroenteritis in a PED. Methods: This prospective study enrolled a convenience sample of children presenting to a tertiary care urban PED with suspected acute gastroenteritis from June-August 2012. The patient's ETCO2 level was measured using nasal capnography. The treating physician was blinded to the ETCO2 results. The enrolling and treating physicians then used the Clinical Dehydration Scale (CDS) to record the patient's level of dehydration. When available, lab results were also recorded. The primary outcomes were 1) the association between ETCO2 and the CDS, and 2) the association between ETCO2 and venous bicarbonate (HCO3). Results (preliminary): There were 50 children evaluated. Mean age was 4.8 years (SD 4.0; range 6 months to 17 years) and 3/50 (6%) were admitted. 32 children (64%) had no dehydration, 16 (32%) mild dehydration, and 2 (4%) moderate dehydration. The correlation between the total CDS and ETCO2 was r=-0.36. Conclusion: ETCO2 was significantly correlated with both CDS and HCO3. However, the correlation between ETCO2 and HCO3 was much stronger than the correlation between CDS and HCO3. These findings suggest that ETCO2 may serve as a rapid adjunct in measuring dehydration and acidosis.
Financial and Quality Impact of Voice Recognition versus Dictation/Transcription on Emergency Medicine Records. Roshanak Didehban and Stephen J. Traub, Mayo Clinic, Scottsdale, AZ. Background: Voice recognition (VR) is an inexpensive alternative to dictation/transcription (DT). Studies of VR-generated medical records (often within radiology and pathology) report error rates between 1.5% and 42%. Relatively few studies have addressed the use of VR within EM.
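The correlation analyses in the ETCO2 dehydration study above (ETCO2 vs. CDS, and ETCO2 vs. HCO3) are Pearson correlations of the following form; the values here are invented.

from scipy.stats import pearsonr

etco2 = [38, 35, 33, 30, 29, 27, 25]  # mmHg, invented
cds = [0, 1, 1, 2, 3, 4, 5]           # total CDS scores, invented

r, p = pearsonr(etco2, cds)
print(f"r={r:.2f}, p={p:.3f}")  # a negative r mirrors the reported r=-0.36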
Objectives: We hypothesized that VR would be less expensive than DT, but would produce more errors. We compared the percentage of charts that failed quality review based on an overall score involving several individual quality criteria, the percentage of charts with medical errors (deemed the most significant of all individual quality criteria), and the per-patient cost of VR vs. DT. Methods: DESIGN: Retrospective observational study of VR medical records from December 2009 and DT medical records from 2Q2009-2Q2010. SETTING: Tertiary academic emergency department. PARTICIPANTS: 15 EM physicians and 104 transcriptionists. INTERVENTIONS/OBSERVATIONS: For the quality audit, individual charts were given weighted scores based on a standard methodology involving several pre-specified and pre-defined criteria. One quality management professional conducted the audit of VR records, and three quality management professionals completed the audit of DT records. Quality results are reported as (raw data = percentage; 95% CI for percentage); statistical comparison of quality data was performed via chi-square test. Financial costs of VR were estimated assuming a 2-year utilization of VR based on actual VR start-up costs. DT costs are actual. All costs are reported in dollars. Results: We analyzed 90 VR and 384 random DT medical records. The number of reports that failed overall quality review was higher in the VR group (21/90 = 23%; 14.6%-34%) than the DT group (40/384 = 10.4%; 7.6%-13.9%) (p = 0.001). The number of records with medical errors was also higher in the VR group (59/90 = 65.6%; 55.7%-75.4%) than the DT group (37/384 = 9.6%; 6.7%-12.6%) (p < 0.0001). The projected cost per patient for VR was $0.56 and the actual cost per patient for DT was $15.26, representing a cost reduction of $14.70 (96%) per patient. Conclusion: VR is much less expensive than DT, but quality was poorer as measured by the number of charts that failed quality review and the number of charts with medical errors. Impact Objectives: Determine the effect of varying key processes in the evaluation of OU CP patients on total length of stay (LOS) and short-stay percentage, defined as LOS <6 hours (h). Our hypothesis was that specific changes would have differential effects on LOS, allowing us to prioritize interventions. Methods: We created and validated a simulation model of the OU at an academic tertiary-care hospital with 59,000 ED visits and 6,678 OU visits in 2011 using Arena 13.5 software. The OU is a distinct unit within the ED, managed by emergency physicians. Simulation inputs were based on historical data from 1/1/11 to 6/30/11 (N=835). We tested the effect on LOS of three feasible interventions on OU CP patients: 1) decreasing serial troponin timing from 6h to 3h, 2) reducing the absolute percentage of stress testing by 10%, and 3) increasing daily hours of stress testing availability by 2h, until 5 pm. We tested for a change in mean OU LOS using a t-test and for a change in the short-stay percentage using a chi-square test. Results: All three interventions decreased the total LOS while also increasing the short-stay percentage. Reducing serial troponin timing lowered the mean LOS from 9.53h (SD=6.13h) to 7.44h (SD=5.97h), p<0.0001. The reduced interval also increased the short-stay percentage from 24.6% to 57.9%, p<0.0001. Lowering the percentage of stress testing changed the mean LOS from 9.53h (SD=6.13h) to 8.27h (SD=4.57h), but produced no change in the short-stay percentage, p=0.2.
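A toy Monte Carlo in the spirit of the observation unit simulation above shows how shortening the serial troponin interval shifts mean LOS and the short-stay percentage; all distributions and parameters are invented, not taken from the study's Arena model.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # simulated OU chest-pain patients

def simulate(troponin_interval_h):
    workup = rng.exponential(2.0, n)                         # other work-up time (h), invented
    stress = rng.binomial(1, 0.4, n) * rng.uniform(1, 5, n)  # stress testing time (h), invented
    los = troponin_interval_h + workup + stress
    return los.mean(), (los < 6).mean()

for interval in (6, 3):
    mean_los, short_stay = simulate(interval)
    print(f"{interval}h troponin: mean LOS {mean_los:.1f}h, short-stay {short_stay:.0%}")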
Finally, extending the hours of stress testing availability decreased the mean LOS from 9.53h (SD=6.13h) to 8.91h (SD=4.79h), p<0.0001, but produced no change in the short-stay percentage, p=0.9. Implementation of all three interventions at once is expected to reduce total yearly CP OU time by 39% (7,557h). Conclusion: In this simulation model, reducing serial troponin timing has the largest overall effect on OU LOS, with smaller gains from stress test interventions, since those affect fewer patients. The increase in short-stay visits has important implications for the use of OUs. CP patients managed in <6h do not qualify for observation services and should be entirely managed in the ED, creating new capacity in the OU. Objectives: To test the hypothesis that after the device is adjusted for, no other variables will retain an association with hemolysis. Methods: Observational cohort study of routine blood specimens obtained by ED staff. Data were collected on device, needle size, site, fullness of tube, tourniquet time, and difficulty of stick. Specimens were sent to the laboratory by a vacuum-powered tube system. A standard automated process that measures free hemoglobin was used to identify hemolysis. We performed a simple dichotomous stratified tabular analysis by device (butterfly vs. angiocatheter), examining any remaining associations after adjustment. We calculated the 95% CIs around the difference of hemolysis rates per 100 specimens. Results: We analyzed 4,513 specimens. The hemolysis rate was 14.6% with an angiocatheter vs. 2.7% with a butterfly, for a difference of 11.9% (95% CI 10.2%, 13.4%). After tabular stratification by device, the rates associated with the other variables were as follows: Conclusion: The rate of hemolysis is markedly higher when blood is drawn through an angiocatheter than a butterfly. Consistent with our hypothesis, no other features of the blood draw were associated with hemolysis in samples drawn by butterfly. However, when angiocatheters were used, the rate of hemolysis was higher for smaller gauge needles, tubes less than half full, long tourniquet time, and difficult sticks. These data indicate that use of butterfly devices can markedly decrease hemolysis rates without the need to change any other characteristics of blood drawing. Background: Over the past 20 years, there has been a steady increase in ED patient volume and wait times. The desire to maintain or decrease cost while improving throughput requires novel approaches to patient flow. The breakout session "Intervention to Improve the Timeliness of Emergency Care" at the June 2011 AEM consensus conference "Interventions to Assure Quality in the Crowded Emergency Department" posed the challenge for more research on the split Emergency Severity Index (ESI) 3 patient flow model. A split ESI 3 patient flow model divides high-variability ESI 3 patients from low-variability ESI 3 patients. Objectives: By segregating low-variability ESI 3 patients and managing them in a flow pattern consistent with ESI 4 and 5, we hoped to improve door-to-discharge turn-around-time (TAT) for all patients who presented while the process was in place. Methods: This was a retrospective case-control chart review at an urban academic Level I trauma center seeing over 70,000 adult patients a year. Cases consisted of adults who presented from 9 am to 11 pm from June 1st to December 31st, 2011 and were discharged. Controls were patients who presented at the same time and day but in 2010. Visit descriptors included age, race, sex, ESI, and first diagnosis.
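The hemolysis study above reports a difference in rates per 100 specimens with a 95% CI; below is a minimal sketch of that calculation, with per-group denominators assumed since only the overall total of 4,513 specimens is reported.

import math

n1, p1 = 2500, 0.146  # angiocatheter specimens (assumed n), reported rate
n2, p2 = 2013, 0.027  # butterfly specimens (assumed n), reported rate

diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")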
The first diagnosis was coded based on methods used by the Agency for Healthcare Research and Quality to codify ICD-9 into disease groups. Linear models compared log-transformed TAT for cases and controls. A front-end ED redesign involved creating guidelines to split ESI 3 patients into low and high variability, a hybrid sort/triage RN, an intake area (former low-acuity area) with an adjacent internal results waiting room, and a treatment area for low-variability ESI 3 patients who require further management. This was done without additional beds. The areas were staffed with an attending EM physician, a physician assistant, three RNs, two clinical care techs, and a scribe. Results: There was a 5.9% decrease, from 2.58 hrs to 2.43 hrs, in the geometric mean of TAT for discharged patients from 2010 to 2011, with a 95% CI of 4.5% to 7.3% (2010 n=20,215; 2011 n=20,653). Abdominal pain was the most common diagnostic grouping (n=2,484 and 2,464), with a reduction in TAT of 14.8%, from 4.37 hrs to 3.8 hrs, and a 95% CI of 11.6% to 18.1%. Objectives: We performed a natural experiment to test the hypothesis that a multi-pronged approach would reduce the incidence of CAUTI attributed to catheters inserted in the ED. Methods: This was a prospective, observational study of ED-based interventions to decrease CAUTI at an academic, tertiary ED with an annual volume of 110,000 visits. National Healthcare Safety Network criteria were used to identify CAUTI. CAUTIs were attributed to the location of catheter insertion when they occurred within 7 days. We used October 2008 to February 2009 as a historical control. Interventions began during 2009. We used June through October 2012 to assess our sustained results. Interventions included mandatory nurse education on insertion and maintenance techniques, required two-person insertion, direct observation by nursing leadership, individual feedback for all cases of infection, and documentation audits. Physicians received education on appropriate indications for urinary catheters, which were documented during computer order entry. Data were analyzed with descriptive statistics and Fisher's exact test for dichotomous variables. Results: Comparing the control and follow-up periods, there was no significant difference in the incidence of CAUTI attributed to insertion among ED-inserted catheters (1.2% and 1.6%, respectively). However, fewer catheters were inserted in the ED during the follow-up period (823 versus 317; p<0.0001). During the same periods, hospital-wide CAUTIs decreased from 126 to 70. The number of CAUTIs attributed to ED insertion during each period was 10 and 5. The proportion of hospital-wide CAUTIs attributed to ED insertion was not significantly changed (7.9% and 7.1%, respectively). Conclusion: In our experience, the rate of CAUTI attributed to insertion technique in the ED is low. However, its contribution to the hospital's CAUTI burden is measurable. Our most effective intervention was to reduce urinary catheter insertions through education about appropriate use. Background: Medication non-adherence after ED discharge is associated with recurrent ED visits, resulting in significant cost to the health care system. Few studies have evaluated the effect of take-home medication (THM) packs on adherence, which may be an important way to reduce disparities. Objectives: Our ED has THMs for many common ED discharge prescriptions, such as antibiotics and analgesics. Our aim was to compare returns to the ED between patients given a THM vs.
patients receiving equivalent paper prescriptions. Our hypothesis was that patients receiving THM would be less likely to return to the ED within 30 days. Methods: This was an observational, prospective cohort study in an urban, university-affiliated, Level I trauma center. Consecutive adult patients discharged from the ED with either a THM or an equivalent paper prescription, identified through daily pharmacy reports, were included. Patients were excluded if they were <18 years old, were discharged to a location other than home, received both a THM and a prescription, had a planned ED visit, or had been previously enrolled. Baseline characteristics included age, sex, identification of a PCP, primary language, ethnicity, marital status, and insurance status. Review of the electronic medical record (EMR) was used to determine if patients returned to the ED within 30 days, and if repeat visits were for the same complaint. We also recorded subsequent visits to other providers in the university health care system within 30 days. Relative risk was assessed for the two groups. P<0.05 was considered significant. The study was powered for a 20% difference between groups. Results: 220 eligible patients were included, of whom 30 were excluded (10 previously included, 8 planned ED visit, 6 discharged to "other than home", 3 duplicate, 3 admitted). Among the 190 subjects, 65 received THM and 125 paper prescriptions. No significant differences were found between groups in baseline characteristics. Patients receiving THM were significantly more likely to return within 30 days than those receiving prescriptions (26.2% vs. 14.4%; diff=11.8%, 95% CI 0% to 24%). There was no significant difference in return rates when only returns for a related diagnosis were considered (12.3% vs. 8%; diff=4.3%, 14). Conclusion: Subjects who received THM returned to the ED twice as often within 30 days as those receiving prescriptions. Further research is needed to explore the reasons underlying the identified patterns of return visits. Results: Thirty-two DKA patients were included in the study. The median age was 42 and 38% were male. The median thiamine level was 14 nmol/L (IQR 8.5-18), and 8 patients (25%) were found to have absolute thiamine deficiency, with levels <9 nmol/L. Six out of 12 patients (38%) identified as African-American had absolute thiamine deficiency. The median lactate levels in the thiamine-deficient and -sufficient groups were 5.8 (2.1-7.7) and 2.1 (1.1-3.7), respectively (p=0.005). A statistically significant inverse association between lactic acid levels and blood thiamine levels was found (r=-0.494, p=0.009; Figure 1). This relationship remained significant after adjustment for severity of illness using the APACHE II score (r=-0.5, p=0.04). A direct relationship between thiamine levels and admission serum bicarbonate was noted (r=0.5, p=0.002; Figure 2). Patients with thiamine deficiency had a longer median hospital LOS (6 vs 3 days), but this difference did not reach statistical significance (p=0.24). Conclusion: Thiamine deficiency is prevalent in DKA and associated with higher lactate levels and lower bicarbonate. Study of thiamine supplementation in DKA is warranted. Objectives: The objective of this study was to demonstrate inter-provider variability in pressures generated during initial flush procedures. Methods: Fifteen emergency physicians trained in IO access procedures performed 60 flushes (10 cc saline) in random order in two cadavers (Cadaver #1, N=9; Cadaver #2, N=6).
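A minimal sketch of the relative-risk calculation used in the take-home medication study above; the counts are back-calculated from the reported percentages (17/65 is approximately 26.2%, 18/125 is 14.4%) and should be treated as approximate.

import math

a, n1 = 17, 65   # THM group: 30-day returns / total
b, n2 = 18, 125  # prescription group: 30-day returns / total

rr = (a / n1) / (b / n2)
se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo = math.exp(math.log(rr) - 1.96 * se_log)
hi = math.exp(math.log(rr) + 1.96 * se_log)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")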
IO cannulas (15G, EZ-IO®, Vidacare Corp, San Antonio, TX) were inserted into the proximal tibiae and proximal humeri. A second cannula was placed in the mid-diaphysis of each bone to record intramedullary pressures. Providers were blinded to their flush pressures and the flush techniques of others. Results: The median IO pressure (IOP) generated by providers at all sites was 904 mmHg (range 83-2942 mmHg) and flush duration was 5.2 sec (range 1.0-13.4 sec). Significant differences were noted among providers in peak IOP (P = 0.03). Providers were consistent in the relative forces they generated at each flush site in spite of order randomization. An inverse nonlinear relationship was observed between flush duration (t) and the peak IOP generated (peak IOP = 2272.3e^(-0.225t), R² = 0.470, P < 0.001). The table presents significant differences in intramedullary flush pressures at the left and right proximal tibiae (LPT and RPT) and left and right proximal humeri (LPH and RPH). Conclusion: The IO compartment pressures generated by providers demonstrated significant inter-operator variability, with a greater than 35-fold difference in flush forces. Although it has been established that flushes are necessary to achieve adequate IO flows, optimal practices for flush procedures have not been established. Our study suggests that flush practices can be refined to reduce risks associated with high intraosseous pressures by controlling the duration of the flush. (The opinions or assertions contained herein are the private views of the authors and are not to be construed as reflecting the views of the Department of the Army or the Department of Defense.) Objectives: We hypothesized that central venous pH and pCO2 are more reliable than peripheral venous pH and pCO2 when compared to arterial pH and pCO2. Our objective was to determine if there is a difference between central pH and pCO2 and peripheral pH and pCO2 when compared to arterial pH and pCO2 in an undifferentiated critically ill patient population. Methods: We performed a prospective cohort study of patients in the emergency department (ED) and intensive care unit (ICU) at a single academic tertiary referral center. Patients were eligible for enrollment if the treating physician ordered an ABG. Statistical analysis of the data from the ABG and VBG was performed using paired t-test, Pearson's chi-square, and Pearson's correlation. Background: Infection-related deaths are a leading cause of morbidity and mortality in the US, affecting over 1 million people a year and costing $17 billion annually. Understanding of the determinants of infection-related death rates has improved, yet uptake of evidence-based interventions remains sub-optimal. An understanding of the geographic distribution of infection-related death rates has been extremely limited. Objectives: We tested the hypothesis that defined areas of the US could be identified where infection-related death counts and rates were disproportionately high ("hot spots"). Methods: Death data files (National Center for Health Statistics) were combined with 2010 Area Resource File demographic data. Infection-related deaths were identified using previously described ICD-10 primary cause of death codes for infection. Local tests of spatial autocorrelation (LISA statistic) were conducted in ArcGIS software to identify the existence and location of disproportionate areas. "Hot spots" and "cool spots" were defined as regions where the infection death rate was significantly higher or lower than the national mean and surrounding counties.
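The inverse exponential relationship reported in the IO flush study above (peak IOP = 2272.3e^(-0.225t)) can be recovered from duration-pressure pairs by nonlinear least squares, as in this sketch over simulated data.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = rng.uniform(1.0, 13.4, 60)  # flush durations (s), spanning the reported range
iop = 2272.3 * np.exp(-0.225 * t) * rng.lognormal(0, 0.3, 60)  # simulated peak IOP (mmHg)

def model(t, a, k):
    return a * np.exp(-k * t)

(a, k), _ = curve_fit(model, t, iop, p0=(2000, 0.2))
print(f"peak IOP ~ {a:.0f} * exp(-{k:.3f} t)")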
Results: The US infection-related and severe sepsis-related death rates were found to be 36.6/100,000 and 7.1/100,000, respectively. The analysis revealed two "hot spots": 1) 196 counties (5.8%) with an infection-related death rate of 104.2/100,000, or three times the national mean, located across the Midwest and mid-Atlantic US (p<0.001); and 2) 285 counties (8.4%) with a severe sepsis death rate of 28.8/100,000, or four times the national rate, located in the South and mid-Atlantic (p<0.001). A "cool spot," a cluster of 157 counties (4.6%) with low death rates (0.9/100,000), was located across the Southwest and Mountain states, p<0.001 (figure). Background: Although most patients do not have injuries requiring hospital admission, persistent musculoskeletal pain after MVC is a common and debilitating problem. Data from other settings suggest that recovery from an acute painful condition takes longer in older adults than in younger adults. Objectives: We hypothesized that among ED patients with moderate or severe pain due to MVC, the decrease in pain severity during the first six weeks after MVC would be less for patients age 65 years or older than for younger patients. Methods: We analyzed data from a prospective study of adults presenting to one of eight EDs after MVC without fracture or injury requiring admission. Pain severity was evaluated in person in the ED and by phone six weeks after the MVC using a 0-10 scale. Multivariable linear regression was used to assess the relationship between patient age group, represented as 10-year interval categories, and pain recovery, defined as the change in pain severity from the ED to the six-week assessment, adjusting for patient sex, extent of vehicle damage, and pain in the month prior to the MVC. Results: Of 699 patients with complete ED and six-week data who had not hired a lawyer at 6 weeks, 535 patients (77%) had moderate or severe pain (pain score 4 or more) in the ED. Pain in the ED was more frequently rated as moderate or severe by younger adults: age 18-24=81%; 25-34=80%; 35-44=78%; 45-54=73%; 55-64=67%; over 65=67%; p<0.005. Among these patients, pain recovery decreased with advancing age group (p<0.001). After adjusting for patient sex, extent of vehicle damage, and pain in the month prior to the MVC, the relationship between age group and pain recovery persisted, with generally lower mean decreases in pain after six weeks for older compared to younger age groups: age 18-24, mean decrease in pain score = 3.1 points (95% CI 2.7-3.6); 25-34 = 3.0 (2.5-3.4); 35-44 = 1.9 (1.3-2.4); 45-54 = 2.3 (1.7-3.0); 55-64 = 2.1 (1.3-2.8); 65 and older = 1.8 (0.9-2.7); p<0.005. Conclusion: Although older adults were less likely to report moderate or severe musculoskeletal pain at the time of the ED assessment, among those with moderate or severe pain, less pain recovery occurred during the first 6 weeks after MVC for older than for younger adults. Background: Older adults are at high risk for untreated and undertreated pain. A pharmacologic regimen to treat acute pain in older adults must combine clinically satisfactory analgesia with sufficient safety precautions, minimizing the incidence of potentially serious side effects. This is especially important for older adults because both pharmacokinetics and pharmacodynamics may be altered by age and comorbidity. Objectives: To compare a rapid, two-step, hydromorphone titration protocol against usual care in older ED patients with acute severe pain.
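The adjusted analysis in the MVC pain-recovery study above is a multivariable linear regression of the following form; the synthetic DataFrame below merely stands in for the real cohort, and the variable names and codings are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 535  # patients with moderate or severe ED pain, as reported
df = pd.DataFrame({
    "recovery": rng.normal(2.5, 2.0, n),  # 6-week drop in 0-10 pain score, invented
    "age_group": rng.choice(["18-24", "25-34", "35-44", "45-54", "55-64", "65+"], n),
    "female": rng.integers(0, 2, n),
    "damage": rng.integers(0, 3, n),       # extent of vehicle damage, invented coding
    "prior_pain": rng.integers(0, 11, n),  # pain in the month before the MVC
})

fit = smf.ols("recovery ~ C(age_group) + female + damage + prior_pain", data=df).fit()
print(fit.params.round(2))  # age-group coefficients give adjusted recovery differences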
Methods: ED patients aged 65 years and older with severe pain were randomly allocated to the hydromorphone titration protocol or usual care. Hydromorphone titration protocol patients initially received 0.5 mg IV hydromorphone. Usual care patients received any dose of any IV opioid. At 15 minutes, both groups were asked, "Do you want more pain medication?" Patients in the hydromorphone titration group who answered "yes" received a second dose of 0.5 mg IV hydromorphone. Patients in the usual care group who answered "yes" had their treating attending physician notified, who then could administer any (or no) additional medication. The primary efficacy outcome was satisfactory analgesia, defined as the patient declining additional analgesia at least once when asked at 15 or 60 minutes after administration of the initial opioid. Dose was calculated in morphine equivalent units (MEU: 1 mg hydromorphone = 7 mg morphine). Need for naloxone to reverse adverse opioid effects was the primary safety outcome. Results: 83.0% of 153 patients in the hydromorphone titration group achieved satisfactory analgesia compared to 82.5% of 166 patients in the usual care group (difference 0.5%, 95% CI -7.9% to 8.8%). Patients in the hydromorphone titration group received lower mean initial doses of opioids at baseline than patients in usual care (3.5 MEU vs. 4.7 MEU; difference -1.2 MEU; 95% CI -1.5 to -0.9 MEU) and lower total opioids through 60 minutes (5.3 MEU vs. 6.0 MEU; difference -0.7 MEU; 95% CI -1.4 to 0.1 MEU). No patient needed naloxone. Conclusion: A hydromorphone protocol using low-dose titration of IV hydromorphone in increments of 0.5 mg provides comparable analgesia to usual care with less opioid over 60 minutes in older adults presenting to the ED with acute severe pain. Methods: Patients were included in analyses if they had complete data for patient age, race, sex, pain severity, and duration of emergency medical services treatment. The two outcomes examined were the receipt of any analgesic and the receipt of an opioid analgesic. Logistic regressions were used to estimate odds ratios for receipt of analgesics or opioids for older versus younger patients, stratified by sex and pain severity and controlling for race, treatment duration, and whether the patient had experienced trauma. Results: Complete data were obtained for 407,763 transports, including 186,776 transports of patients age 65 years or older. Older males were less likely than younger males to receive an analgesic or an opioid regardless of pain severity. In females, the relationship between patient age and receipt of analgesics depended on pain severity. Among females with mild or moderate pain, older females were less likely than younger females to receive either form of pain treatment (figure). Further, among females with mild or moderate pain, the oldest patients (age 85 and older) were the least likely to receive any analgesic or an opioid, but among females with severe pain the oldest patients were the most likely to receive pain treatment. The observed interaction between age, pain severity, and sex remained in the subset of patients with abdominal or back pain. Conclusion: Among patients transported in North Carolina in 2011, older patients were less likely to receive pain treatment than younger patients, except for older females with severe pain, who were more likely to receive pain treatment than younger females.
Objectives: We examined the hypothesis that Lean-based reorganization of Fast Track (FT) process flow would improve length of stay (LOS), the percent of patients discharged within 1 hour, and room utilization, without added expense. Methods: This study is a prospective, controlled, before-and-after analysis of FT process improvements in a Level I tertiary care academic medical center with >95,000 annual patient visits. All adult patients seen during the study periods of 6/2010-10/2010 and 6/2011-10/2011 were included, and data were collected from a computerized tracking system. Concurrent patients seen in another care area (START) were used as a comparison. The intervention included a simple reorganization of patient flow through existing FT rooms, based in systems engineering science and modeling, including queuing theory, demand-capacity matching, and Lean methodologies. No modifications to staffing or physical space were made. Primary outcomes were LOS of discharged patients, percent of patients discharged within 1 hour, and time in exam room. Patient characteristics were compared before and after the intervention to ensure lack of inherent bias between study groups. LOS and exam room time were compared using Wilcoxon rank sum tests, and chi-square tests were used for the percent of patients discharged within 1 hour. Results: The table provides demographic data for the study population. Following the intervention, median LOS of discharged patients was reduced by 15 minutes (158 to 143 min, 95% CI 12 to 19 min). The number of patients discharged in <1 hr increased by 2.8% (from 6.9% to 9.7%, 95% CI 2.1% to 3.5%), and median exam room time decreased by 34 minutes (89 to 55 min, 95% CI 32 to 38 min). In comparison, patients seen in START had no change in median LOS (265 to 267 min) or in the proportion of patients discharged in <1 hr (2.9% to 2.9%). Conclusion: In this single-center trial, simple Lean-based reorganization of patient flow was associated with improved ED performance measures and capacity, without added expense. Broad, multi-centered application of systems engineering science might further improve ED throughput and capacity. Background: The goal of emergency medicine clinicians and administrators is to provide high-quality clinical care and also excellent customer service. Providing guest relations associates (GRAs) to attend to the non-clinical needs of patients and their families during their emergency department (ED) visit may improve patient satisfaction. GRAs provide patients' family members access to private family waiting areas, snacks, a newspaper, or simply an update on the status of their care. Objectives: The objective of this research project was to evaluate the effectiveness of GRAs on patient satisfaction. Methods: Telephone survey of all ED patients identified as having a primary care physician on staff (loyalty patients) at a large (>100,000 ED visits/yr) urban teaching hospital between May and November 2012. After discharge, all loyalty patients were surveyed using a simple telephone survey regarding their recent experience at our ED. Respondents were asked, "How likely are you to recommend this ED to your friends and family?" and "How would you rate the overall care you received from the ED?" and responded based on a 10-point scale. Data were analyzed using Student's t-test.
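The Wilcoxon rank-sum (Mann-Whitney) comparison used for LOS in the Fast Track study above can be run as follows; the two samples are simulated log-normal minutes roughly centered on the reported medians.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
los_pre = rng.lognormal(np.log(158), 0.5, 500)   # pre-intervention LOS (min), simulated
los_post = rng.lognormal(np.log(143), 0.5, 500)  # post-intervention LOS (min), simulated

stat, p = mannwhitneyu(los_pre, los_post, alternative="two-sided")
print(f"U={stat:.0f}, p={p:.3f}")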
All loyalty patients, regardless of whether they were visited by a GRA, were surveyed to measure whether those receiving this service were more or less likely to recommend the ED and whether there was a perceived difference in overall care. Results: Of the 622 patients surveyed, 387 (62%) were seen by a GRA. Using a 0 to 10 scale (10 = most likely) of likeliness to recommend the ED, patients seen by a GRA reported an average score of 9.66/10, while those not seen by a GRA reported an average of 8.64/10 (p<0.05). Patients seen by a GRA rated their overall care as 9.42/10, while those not seen by a GRA reported an ED overall care rating of 8.50/10 (p<0.05). Conclusion: Research suggests customer service in the ED is important for the overall perception of the hospital, as many patients may find the ED to be their sole interaction with the hospital. Additionally, both CMS withholds and the opportunity to capture revenue and market share based on patients' experiences create additional incentive to maximize the patient experience. The use of GRAs in the emergency center has been an effective tool in improving the likelihood of recommending the EC and improving patients' overall perceptions of quality of care. Objectives: To compare physician productivity and billing before and after the transition to an EMR in an academic ED. The null hypothesis was that there are no differences in productivity and billing related to EMR implementation. Methods: This observational pre/post study compares data collected on productivity and charges before and after the transition to an EMR system, which occurred on June 13, 2012. The 3 months prior to implementation of the EMR (March-May 2012) were compared to the 3 months after (July-Sept 2012). The month of June was omitted as a "wash-in" period to allow providers to become acquainted with the system. Data from 31 ED physicians were included, with each physician acting as his or her own control. Productivity was measured two ways for each individual: (1) the total number of encounters, and (2) the worked relative value units divided by the clinical full-time equivalent (wRVU/cFTE). Charges were a compilation of bills sent to all patients seen by each physician. Values for charges, encounters, and wRVU/cFTE were determined for the total care of patients during each study period and separately for procedures, observation stays, and critical care. Comparison was made using a paired t-test. Results: The table shows the results of a pre/post comparison of charges and productivity for total care, procedures, observation stays, and critical care time. Statistically significant decreases were seen in productivity and charges in both total care (15-17% and 13%, respectively) and observation stays (53-58% and 53%, respectively). There was a statistically significant (19%) increase in charges for procedures. No change was seen in critical care billing; however, an upward trend was noted. Conclusion: The implementation of an EMR is associated with a significant decrease in productivity and billing for total ED care and observation stays, but with a significant increase in charges for procedures and a trend toward an increase in billing for critical care cases. EDs must be prepared for significant changes in physician productivity and charges when transitioning to an EMR. Background: Inpatient bed availability, particularly telemetry beds, is often constrained. We created a streamlined "Rapid Rule Out" pathway intended to address this shortage by decreasing length of stay (LOS) for a select group of chest pain patients.
Low- to intermediate-risk patients are transferred to the Stress Lab and discharged directly without return to the ED if their exercise treadmill test (ETT) is normal. Objectives: We evaluated differences in LOS, charges, and outcomes between the accelerated and traditional pathways. Methods: This was a prospective observational study of 64 consecutive patients who underwent 6-hr observation in the ED and single cardiac biomarker testing. A group of 70 consecutive patients who underwent 23-hr observation and serial cardiac biomarker testing was used as a historical control group. Both groups included risk stratification by ETT. Baseline characteristics, LOS, hospital charges, and one-month outcomes were recorded. Categorical variables were analyzed with the chi-square test; continuous variables were analyzed using the t-test or Mann-Whitney test as appropriate. All calculations were done using SPSS v.20. Results: Hyperlipidemia was more prevalent in the retrospective group. Otherwise, baseline characteristics were similar in both groups. Compared to the 23-hr observation, 6-hr observation was associated with a significant reduction in LOS and total charges with comparable readmission rates and outcomes. Background: EDs are focused on improving patient throughput and efficiency. Lean process has been described as an improvement tool for reducing waste and adding value in the health care field. Objectives: To assess the change in ED length of stay (LOS) for low-acuity patients, emergency severity index (ESI) score of 4 or 5, in a Super Track (ST) after Lean process improvement. Methods: This was a single-center, retrospective before-after study evaluating two 10-month periods: January 4 to November 3, 2011 prior to intervention and January 4 to November 3, 2012 after intervention. Similar dates were used to avoid seasonal confounders. A multidisciplinary team created a new ST process for low-acuity patients. Thirty process changes were made using Lean methodology. The ST operates 12 or 14 hours a day, 7 days a week, based on volume curves. All patients presenting with an ESI of 4 or 5 were included. Patients presenting for routine dialysis were excluded. ED LOS and left without being seen (LWBS) data were abstracted using the electronic medical record. Data were analyzed using Wilcoxon rank sum and chi-square tests where appropriate. Results: A total of 24,520 patient encounters were analyzed: 12,880 patients in the pre-intervention and 11,640 in the post-intervention periods. Post-intervention, median total ED LOS decreased by 38 minutes, representing a 37% reduction (median time of 103 minutes in the pre-intervention period compared with median time of 65 minutes post-intervention, p = 0.002). During the post-intervention period, 8,008 patients were processed through the ST compared with 3,542 patients during non-ST hours. Median ED LOS was 33 minutes less during ST hours compared with non-ST hours, representing a 36% reduction (median time of 59 minutes for ST compared with median time of 92 minutes for non-ST, p < 0.001). Total LWBS decreased from 219/12,880 (1.7%) in the pre-intervention period to 90/11,640 (0.8%) post-intervention (p < 0.001). During ST operation, LWBS was 18/8,026 (0.2%) compared with 72/3,614 (2.0%) during non-ST hours (p < 0.001). Conclusion: Creation of a ST through a Lean process was associated with a significant decrease in ED LOS and LWBS for low-acuity patients. The benefit of improved ED LOS and LWBS was directly correlated with ST hours.
In the era of increased ED volumes and ED crowding, Lean process can be implemented to improve efficiency in the ED for low-acuity patients. Objectives: To determine if real-time RVU tracking changes physician productivity in the absence of financial incentive in an academic environment. Methods: Physicians in a 26-physician academic group (urban, 90,000 visits/year) receiving base pay without incentive were informed that their productivity in terms of work RVU/hour would be tracked on a monthly basis. Physicians were informed that this information would be used to establish a benchmark and develop future incentive-based pay. Physicians underwent 3 months of orientation and billing lectures with RVU data provided. Data were collected prospectively for 9 months (tracking period) and compared to the same 9 months in the preceding year (control period). Linear regression was used to identify predictors of productivity including age, sex, years in practice, workload, and administrative position. Results: Physicians were on average 40 years old, 58% male, with a mean practice time of 11.6 years (range 3-26). In the control period, Objectives: To evaluate the association between type of health insurance and reasons for ED visit among discharged patients. Methods: We analyzed data on adults from the National Health Interview Survey who had ED visits within the past 12 months. We focused on the reported reason(s) for presentation among those who were discharged after their most recent ED visit (n=4,606). The survey queried seven specific reasons for the most recent ED visit, which we classified as relating to high acuity (three items) or limited access to alternate care (four items). We analyzed the survey-weighted data using multivariable logistic regression models to test the association between health insurance type and reasons for ED visits, adjusting for demographics, education, defined source of primary care, and self-rated health status. Results: Overall, 65.0% (95% CI 63.0-66.9) of adults reported >=1 high-acuity issue and 78.9% (95% CI 77.3-80.5) reported >=1 access issue. The most common high-acuity issue was "only hospital could help" (55.4%) and the most common access issue was "doctor's office was not open" (49.3%). Among those who reported no high-acuity issue motivating the most recent ED visit, 83.9% (95% CI 81.5-86.0) reported >=1 access issue. After adjusting for covariates, reports of high-acuity issues were similar by type of health insurance. Adults without a defined source of primary care were less likely to report >=1 high-acuity issue (OR 0.74). Compared to adults with private insurance, >=1 access issue was more likely to be reported by those with Medicaid only (OR 1.50, 95% CI 1.06-2.13), Medicaid + Medicare (OR 1.94), and those without insurance (OR 1.45, 95% CI 1.08-1.96). Adults without a defined source of primary care were also more likely to report >=1 access issue (OR 1.45, 95% CI 1.08-1.95). These access results were not substantively different when those with >=1 high-acuity issue were removed. Conclusion: Variability in ED utilization rates among discharged patients by type of insurance may be driven primarily by access to alternate care, rather than variability in perceived acuity. This may represent a significant opportunity to intervene and reduce some potentially avoidable ED visits. Objectives: To characterize variability in the supply of UCCs by U.S. state and the association between UCC supply, ED supply, and population characteristics of each state.
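A sketch of the per-capita supply measure and simple linear regression used in the urgent care center analysis that follows; the five "states" below are invented.

import numpy as np
from scipy.stats import linregress

population = np.array([0.6e6, 4.3e6, 9.9e6, 19.5e6, 39.0e6])  # invented state populations
uccs = np.array([20, 150, 310, 520, 1200])                    # invented UCC counts
pct_uninsured = np.array([8.0, 12.5, 10.1, 11.3, 14.2])       # invented uninsured rates

ucc_per_100k = 1e5 * uccs / population
fit = linregress(pct_uninsured, ucc_per_100k)
print(f"UCCs/100k vs. % uninsured: b={fit.slope:.3f}, p={fit.pvalue:.2f}")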
Methods: For this cross-sectional study, we obtained a comprehensive database of the 8,977 U.S. UCCs from the Urgent Care Association of America and of the 4,967 U.S. EDs from the National ED Inventory. Location was classified by U.S. census region, state, and rural-urban status. Population characteristics for each state were obtained from the U.S. Census Bureau's Current Population Survey. We analyzed data using descriptive statistics and linear regression to characterize the association between UCC supply and population characteristics. Results: Overall, U.S. states had a median 3.07 (range 1.32-5.56) UCCs per 100,000 population and a median 1.85 (range 0.83-6.76) EDs per 100,000 population. The ratio of UCCs per ED in each state varied notably (median 1.61; range 0.26-4.30). The number of UCCs per 100,000 population was lowest in the Northeast (median 1.7) and highest in the West (median 3.3). Similar regional variability was not observed for the number of UCCs per ED. Higher numbers of UCCs per 100,000 population were associated with higher rates of uninsured (b 0.069, 95% CI 0.002 to 0.135) and lower rates of privately insured (b -0.054, 95% CI -0.103 to -0.005) in that state; there was no association with overall population size, urban density, or proportion of the population with public insurance. Higher numbers of UCCs per ED were associated with higher population size (b 4.6, 95% CI 0.69 to 8.4), greater urban density (b 3.3, 95% CI 1.9 to 4.9), and higher rates of uninsured (b 0.078, 95% CI 0.012 to 0.14). UCCs were more concentrated in urban (3.2 per 100,000 population) than rural areas (2.4 per 100,000 population). Conversely, EDs were more concentrated in rural (4.2 per 100,000 population) than urban areas (1.2 per 100,000 population). Conclusion: The distribution of UCCs is characterized by marked geographic variation, with a higher density of centers in urban areas and in states with high uninsured rates. Future studies should evaluate the effect of local regulations on the distribution of UCCs and the effect of UCCs on payer and acuity case mix for surrounding EDs. requires identifying a pathological diagnosis. Consistent with anecdotal reports, a recent pilot study at one institution found that many patients discharged from the ED do not receive a pathological diagnosis, but rather are given a "diagnosis" that reiterates their symptoms (Wen et al., Emerg Med J 2012). Objectives: We analyzed 17 years of data from the National Hospital Ambulatory Medical Care Survey (NHAMCS) to identify the proportion of patients who receive a pathological diagnosis at ED discharge. We hypothesized that many patients do not receive a pathological diagnosis, and that the proportion of pathological diagnoses has increased between 1993 and 2009. Methods: Using NHAMCS data from 1993-2009, we analyzed visits of patients age >=18 years, discharged from the ED, who had presented with the three most common chief complaints: chest pain, abdominal pain, and headache. Discharge diagnoses were coded as symptomatic versus pathological based on a pre-defined coding system agreed upon by two emergency physicians. We compared weighted annual proportions of pathological discharge diagnoses with 95% CIs and tested them for trend with logistic regression. Results: Among the 299,919 sampled visits, 44,742 visits met inclusion criteria. This allows us to estimate that there were 164 million adult ED visits in the U.S.
during this period (95% CI 151-178 million) presenting with the three most common chief complaints who were discharged home from the ED. For these patients presenting with chest pain, abdominal pain, or headache, the proportions of visits with a pathological discharge diagnosis were 55%, 71%, and 70%, respectively (table). The proportion of pathological discharge diagnoses decreased for all three complaints between 1993 and 2009. Conclusion: According to our analysis of nationally representative ED visits, many patients are discharged from the ED without a pathological diagnosis that explains the likely cause of their symptoms. Despite advances in diagnostic testing and technology, the proportion of pathological discharge diagnoses has decreased. Future studies should investigate the reasons for not providing a pathological diagnosis, and examine whether provision of a pathological diagnosis affects patient satisfaction and clinical outcomes. Objectives: To test the hypothesis that Pretest Consult, a validated, computerized method to estimate the pretest probability of both ACS and PE, could safely reduce radiation and cost exposure. Methods: Four-center, prospective, randomized trial of medical device efficacy. Inclusions: age >18, charted evidence of chest pain and dyspnea with a nondiagnostic electrocardiogram. Exclusions: known ACS or PE, pre-arrival plan for admission, cocaine use, pregnancy, and social situation precluding follow-up. Clinicians entered patient data directly into Pretest Consult (the device), which randomly assigned patients to device output or sham. Device output showed estimated probabilities of ACS and PE, linked to specific diagnostic recommendations to minimize radiation exposure and to produce a posterior probability <1%. Shams received nothing. Protocol-defined 90-day outcomes: 1) dose (mSv) of chest radiation exposure; 2) serious adverse events (SAEs), including delayed diagnoses; 3) same-day admission rate and length of stay (LOS); 4) readmission rate; 5) direct costs and charges; 6) patient satisfaction. A sample size of 550 was estimated to find a 10% difference in the proportion with >5 mSv chest radiation with α=0.05 and β=0.20. Two-sided P-values from Mann-Whitney U or exact test. Results: Data were complete for 540 patients. Means of ages and pretest probabilities were well matched between groups. Within 90 days, 15 (2.7%) patients had ACS and 9 (1.7%) had PE. The following compares the device (n=264) vs sham (n=276) groups: no ACS, PE, or any other significant CP diagnosis was found in 219 (83%) vs 229 (83%); median chest radiation dose was 0.12 vs 0.62 mSv (P=0.04), and 33% vs. 42% of patients with no significant CP diagnosis had >5 mSv (P=0.06); the 90-day rate of SAEs was 11% vs. 16% (P=0.06); the rate of delayed diagnosis of ACS or PE was 0.4% in each group; admission rate was 49% vs 53% and LOS was 495 min vs 544 min (P=0.2); readmission rate was 8.0% vs 10.5% (P=0.3); median costs were $949 vs $1259 (P=0.03) and charges were $6299 vs $7565 (P=0.006); patients answered all questions "very satisfied" in 20% vs 22% (P=0.4). Conclusion: Incorporation of Pretest Consult into the emergency care of patients with chest pain and dyspnea was safe and resulted in lowered radiation exposure and cost over the next 90 days. Objectives: To compare two clinical decision rules for stroke diagnosis in acute vestibular syndrome (AVS).
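The sample-size estimate in the Pretest Consult trial above (550 patients to detect a 10% absolute difference in the proportion with >5 mSv at alpha=0.05 and beta=0.20) can be approximated as below; the assumed baseline rate is taken from the reported sham-group value, so this will not reproduce 550 exactly (the trial's inputs likely differed and may have included attrition).

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed baseline: 42% of sham patients exceeded 5 mSv; detect a drop to 32%.
effect = proportion_effectsize(0.42, 0.32)
n_per_group = NormalIndPower().solve_power(effect, alpha=0.05, power=0.80, ratio=1.0)
print(f"~{n_per_group:.0f} patients per group")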
Methods: Cross-sectional comparison in a prospective series of high-risk patients with AVS (acute, persistent vertigo/dizziness with nystagmus plus nausea/vomiting, head-motion intolerance, and new gait unsteadiness) at a single center. All underwent neuro-otologic exam and neuroimaging (97% MRI). Results of a three-component eye movement battery (HINTS: Head Impulse, Nystagmus, Test of Skew) +/- new hearing loss (HINTS 'plus') were compared to ABCD2 risk scores (0-7 points for age, blood pressure, clinical features, duration of symptoms, and diabetes), using the recommended cutoff of >=4 for stroke. We assessed sensitivity, specificity, and likelihood ratios (LR+, LR-) for stroke and central causes by final neuroimaging (comparison by chi-square statistic). We projected accuracy and imaging costs if MRIs were ordered based solely on decision rule results and compared missed strokes, nondiagnostic MRIs, and costs. Results: We analyzed 187 consecutive adult AVS patients (1999-2012). Objectives: To determine whether CDRs predict short-term outcomes related to PE occurring during a typical hospitalization. Methods: This was a prospective, observational study of a consecutive sample of emergency department (ED) patients with radiographically proven PE from 10/08 to 12/11 in an academic center with 95,000 annual ED visits. In the ED, we collected data required to calculate the pulmonary embolism severity index (PESI), the simplified PESI (sPESI), and the Geneva Prediction Rule. We followed each patient for five days for clinical deterioration or need for a hospital-based intervention: ACLS, new cardiac dysrhythmia, hypoxia or need for respiratory support, hypotension, vasopressor therapy, thrombolytic therapy, recurrent PE, or death. Post-discharge follow-up was based on 5- and 30-day telephone contact and record review. Results: We enrolled 298 patients with PE. Mean age was 59 (±17 years); 152 (51%) were male, and 268 (90%) were white. Most (n=250, 84%) were admitted to non-ICU floors, and the median length of stay was three days. Ninety-nine (33%) patients clinically deteriorated or required a hospital-based intervention in the first five days following PE diagnosis, most commonly hypoxia or need for respiratory support (n=58, 19%) and hypotension (n=35, 12%). Seven (2%) patients developed bleeding. One patient died within five days and 12 within 30 days. The sensitivity and negative predictive value of all CDRs were only moderate, with the PESI and sPESI being more sensitive (69% and 81%, respectively) and the Geneva Prediction Rule being more specific (83%). Background: Ventilator-associated pneumonia (VAP) is a pneumonia not present at the time of intubation that develops after 48 or more hours of mechanical ventilation and is considered preventable. In the ICU setting, relatively straightforward patient care interventions have been shown to reduce the incidence of VAP. Tools such as information packets, posters, and competency testing provided to ICU nurses and respiratory therapists significantly decreased the number of patients who acquired VAP. To date, none of these studies has focused on VAP prevention education in the ED. Objectives: To determine whether a brief educational intervention could improve the knowledge of ED personnel in VAP prevention. Methods: After obtaining IRB exemption, we performed a pre-/post-study in our ED.
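The test characteristics compared in the HINTS vs. ABCD2 study above derive from a 2x2 table of rule result against final neuroimaging, as follows; the counts are invented.

# Invented 2x2 counts for a decision rule against final neuroimaging.
tp, fn = 100, 3  # strokes: rule positive / rule negative
fp, tn = 10, 74  # non-strokes: rule positive / rule negative

sens = tp / (tp + fn)
spec = tn / (tn + fp)
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec
print(f"sens={sens:.2f}, spec={spec:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")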
Participants completed a 10-question test assessing knowledge of VAP, followed by a PowerPoint-based educational intervention and a 10-question post-test. Pre- and post-test scores were compared using a paired t-test. A and B versions of the test were used in a crossover fashion, and their equivalence was demonstrated using an unpaired t-test. Results: Sixty-five subjects were enrolled. The mean difference between pre-test and post-test was 25 ± 19 points (p<0.01). There was no statistically significant difference between the A (p=0.61) and B (p=0.37) versions when administered as pre- or post-test. Conclusion: We demonstrate knowledge transfer in ED personnel regarding VAP prevention after a brief educational intervention. This will be implemented as part of a larger study to reduce VAP in the ED using a VAP bundle. Background: Coronary artery bypass grafting (CABG) causes a major stress response and has been used as a model of critical illness in previous studies. Objectives: To determine if PDH activity and thiamine levels are affected by major stress. We hypothesized that the major stress of undergoing CABG would deplete thiamine levels and decrease PDH activity. Methods: Prospective, observational study at an urban, tertiary care hospital. We enrolled consenting adults who were about to undergo CABG. Blood was obtained prior to surgery, after surgery, and 6 hours after surgery. We measured PDH activity using a novel method: PDH was solubilized from lymphocytic mitochondria, immunocaptured by antibodies, and then subjected to functional and quantitative microplate assays. Repeated measures analysis was used to evaluate changes in PDH and thiamine levels, and the associations between these variables over time. Fisher's exact test was used to determine thiamine deficiency before and after surgery. Pearson correlation coefficients were obtained between lactate and thiamine levels. Results: Fourteen patients were enrolled (age 67.3 ± 9.8 years, 21% female). Thiamine levels were lower after surgery (9.1 ± 1.1 nmol/L) and six hours after surgery (9.1 ± 0.9 nmol/L) as compared to pre-surgery levels (13.5 ± 1.7 nmol/L, p<0.0001). Eight patients were thiamine deficient (<=7 nmol/L) after surgery and 6 patients were deficient 6 hours after surgery, compared to no patients before (p = 0.002 and 0.02). PDH activity was decreased 53.3 ± 6.6% after surgery and 35.8 ± 8.8% six hours after surgery as compared to before surgery (p<0.0001, Figure 2). The amount of PDH protein was decreased 70.7 ± 10.7% after surgery and 37.9 ± 20.1% six hours after surgery. The loss of PDH activity was correlated with loss of PDH protein (r=0.42, p=0.02). PDH activity was associated with thiamine levels (p=0.04). Lactate post-surgery was inversely correlated with thiamine but did not reach statistical significance (r=-0.55, p=0.054). Conclusion: The profound stress of major surgery causes depletion of thiamine levels and decreased PDH activity. These findings are applicable to emergency medicine, as this essential metabolic pathway deficiency may occur in critically ill patients in the ED setting. Objectives: We sought to determine whether prehospital initiation of therapeutic hypothermia improved one-year survival in critically ill patients who survived hospitalization after cardiac arrest. Methods: This was a prospective observational study of all cardiac arrest patients treated in a comprehensive post-cardiac arrest clinical pathway that included therapeutic hypothermia from November 2007 through September 2011.
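The Fisher's exact comparison of thiamine deficiency before versus after surgery in the CABG study above can be reproduced directly from the reported counts (0 of 14 deficient before, 8 of 14 after).

from scipy.stats import fisher_exact

table = [[0, 14],  # before surgery: deficient / not deficient
         [8, 6]]   # after surgery:  deficient / not deficient

odds_ratio, p = fisher_exact(table)
print(f"p = {p:.3f}")  # the abstract reports p = 0.002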
On April 1, 2009, our EMS system began intra-arrest prehospital cooling with 4°C normal saline for all cardiac arrest patients. Prior to 2009, there was no prehospital cooling protocol. All patients were enrolled following admission to an urban academic medical center. Health care system electronic health records and the Social Security Death Index (SSDI) were queried to determine the survival of subjects at one year post-arrest. Results: A total of 132 patients were enrolled; 80 patients (61%) received prehospital fluids, while 52 patients (39%) did not. Hospital survival with good neurological outcome at the time of discharge was observed in 49% of those who received prehospital cooling and 44% of those who were not cooled prehospital (p=0.61). Longitudinal one-year survival was available for 128 patients; the remaining 4 censored patients were alive but had not reached the one-year analysis threshold. Survival at one year from hospital discharge was 49% in patients who received prehospital cooling and 42% in those who did not (p=0.46). Among patients discharged from the hospital with good neurologic function, 95% in the prehospital cooling group and 87% in the no prehospital cooling group remained alive at one year (p=0.30). Conclusion: While limited by a small sample size, this analysis failed to detect a significant difference in one-year mortality between those who received prehospital cooling and those who did not.

Background: Currently the only FDA-approved lytic therapy for treatment of acute ischemic stroke is tissue plasminogen activator (tPA). However, tPA carries a risk of bleeding, and symptomatic and asymptomatic intracranial hemorrhage are observed in 6% of patients. There is a critical need for a safer and more efficient thrombolytic. In vitro and animal re-bleeding studies have demonstrated better safety for plasmin compared to tPA. The plasmin inhibitor alpha-2-antiplasmin is present in high concentrations and rapidly deactivates plasmin, even at plasmin dosages up to six times that required for clot lysis, thus reducing ICH risk. However, this rapid inhibition is also the main drawback of IV administration of plasmin. Our approach is entrapping plasmin in echogenic liposomes (PELIP). ELIP are micron-sized lipid shells with gas microbubbles which enhance ultrasound (US) reflectivity and enable US-triggered drug release. Objectives: The main objective was to measure the lytic efficacy of PELIP in an in vitro human whole blood clot model. Our primary hypothesis was that thrombolysis by PELIP would be at least as effective as thrombolysis by rtPA at the NINDS therapeutic dose of 1 µg/ml. Methods: PELIP were manufactured from a phospholipid mixture using a batch process. Plasmin encapsulation, size distribution, US reflectivity, and the size of microbubbles encapsulated in PELIP were measured. Lytic efficacy was measured using a well-established microscopic technique based on an in vitro human whole blood clot model. The percent decrease in clot width at 30 minutes (FCL) was used as a marker of lytic efficacy. Ultrasound parameters were: 120 kHz, 0.35 MPa pressure amplitude, 1667 Hz, and a 50% duty cycle. Results: After 30 minutes of treatment, the PELIP (US+) treated clots showed significantly greater clot lysis than rtPA-treated clots (p<0.01, Student's t-test, see table). Conclusion: The average thrombolytic efficacy of US-mediated PELIP administration is greater than that of rtPA at the NINDS therapeutic dose of 1 µg/ml.
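For readers wishing to reproduce this kind of between-arm comparison, the sketch below shows one way the 30-minute fractional clot loss could be compared between PELIP (US+) and rtPA clots with the Student's t-test named in the abstract. The group values and sample sizes are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch: comparing percent clot-width decrease (FCL) at 30 minutes
# between PELIP (US+) and rtPA treatment arms with a two-sample t-test.
# The arrays below are hypothetical placeholders, NOT study data.
from scipy import stats

fcl_pelip_us = [62.1, 58.4, 65.0, 59.9, 61.3, 63.7]  # % decrease (hypothetical)
fcl_rtpa = [41.2, 38.5, 44.0, 40.1, 39.6, 42.3]      # % decrease (hypothetical)

# Student's t-test (equal variances), matching the test named in the abstract
t_stat, p_value = stats.ttest_ind(fcl_pelip_us, fcl_rtpa)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

If equal variances cannot be assumed, passing equal_var=False to ttest_ind would yield Welch's t-test instead.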
Background: CT remains the first-line imaging modality for suspected renal colic but is expensive and involves ionizing radiation. Pre-test probability (PTP) has been used in imaging decision rules (PE, DVT), and incorporation of PTP into a clinical tool for kidney stones may be able to reduce unnecessary radiation. Objectives: To assess the predictive value of PTP for kidney stone (KS) as the cause of pain (COP) on unenhanced CT scans of the abdomen and pelvis (FPP CT) in ED patients with suspected renal colic. Methods: Prospective observational study of consecutive adult ED patients undergoing FPP CT. Providers requesting CT scans using computerized physician order entry submitted answers to the following: What is the probability this patient has a kidney stone causing their pain (low <25%, moderate 26-75%, high >75%), and were the results of urinalysis (UA) and/or point-of-care ultrasound (US) known prior to PTP estimation? Final CT diagnosis was extracted from dictated radiologist reports, and medical records were reviewed for all CTs with non-kidney stone (NKS) or uncertain findings to determine if an intervention was performed in the ED. Results: 385 patients were enrolled from May 2011 to April 2012, with a median age of 44 (IQR 33-57); 53% were female and 81% white. The distribution of low, moderate, and high PTP was 8.8%, 44%, and 47%. UA and US were performed in 78% and 93% of patients, and positive in 59% and 38%, respectively. Among 239 (62%) patients with diagnostic CTs, 196 (51%) had KS COP and 33 (8.6%) had NKS COP requiring intervention. The most common NKS COP were pyelonephritis (10), diverticulitis (8), and cholecystitis (3). Conclusion: ED provider estimates of PTP were predictive of kidney stone as the cause of pain on FPP CT. The incidence of alternate findings requiring intervention in patients with high PTP was low, particularly in patients with hydronephrosis.

Objectives: The objective of this study was to determine if a normal renal US could identify renal colic patients who did not require urologic intervention within 90 days of their initial emergency department (ED) visit. Methods: This was a prospective cohort study involving adult patients presenting to the EDs of a tertiary care center with suspected renal colic over a 20-month period. Results of renal US were categorized into four mutually exclusive groups: "normal," "suggestive of ureterolithiasis," "visualized ureteric stone," or "disease unrelated to urolithiasis." Electronic charts were reviewed to determine if patients received urologic intervention within 90 days of the initial ED visit.

Objectives: To compare changes in the in-training exam scores of low-performing residents who completed a personalized remediation program to changes among residents with matched initial exam scores from the years prior to the initiation of the remediation program. Methods: All residents who scored two standard deviations below the national mean on the in-training exam were placed into a remediation program. The program consisted of a weekly reading assignment, a written summary of the reading assignment, and a brief presentation of the material during a weekly meeting of all involved residents. The assignments were selected by, and the presentations proctored by, one of the assistant residency program directors. Assignments were individually based on the participant's weakest of the 20 content areas of the in-training exam.
At the conclusion of the meeting, a multiple-choice test was administered to each resident according to his or her subject matter. The participants received weekly feedback on their presentations and on their test results.

Background: The Residency Review Committee for Emergency Medicine (RRC-EM), along with the ACGME, requires a minimum of 5 hours per week of didactic education for EM residents, with a 70% average attendance rate required for graduation. The efficacy of lecture as the sole educational modality for resident didactics is questionable. We sought to overcome these limitations with the creation of an asynchronous curriculum (self-directed learning that occurs outside a specified time and place) to complement the existing didactic curriculum. Objectives: The objectives of this study are to demonstrate the feasibility of implementing a longitudinal asynchronous curriculum, measure its effect on intern conference participation, and measure annual EM in-training exam results before and after implementation of the curriculum. Methods: The study design included a retrospective before-and-after study of conference participation and in-training scores, plus a single survey comparing perceptions of the new curriculum to the old model. The curriculum was developed using the six-step approach described by Kern et al. and replaced one hour of weekly didactic conference. Residents chose from a list of online modules which were identified in advance by our group and included a quiz or assessment at the end. Statistical analysis used a two-sided t-test to compare conference participation before and after implementation.

Conclusion: Although CT has a much higher diagnostic yield for TICI than CXR, its yield for clinically major injury is low, especially when the preceding CXR is normal. Considering the costs and risks of CT, the development of decision guidelines for its use in blunt trauma is warranted.

Objectives: We sought to determine which variables (patient age, mechanism of injury, provider level of training, provider self-reported motivation) contribute to the decision of emergency medicine (EM) residents and faculty to image patients who meet all NEXUS low-risk criteria after blunt trauma. Methods: This is a prospective observational study of patients with blunt trauma and risk for c-spine injury who did not meet "trauma team activation" criteria. The study site is a Level I community trauma center with an annual ED census of 75,000. Providers completed a survey on a convenience sample of patients regarding whether the patient met NEXUS criteria for c-spine clearance (absence of the following: midline tenderness, distracting injury, intoxication, neurologic deficit, or altered mental status). Researchers then retrospectively queried the electronic medical record for patient age, mechanism of injury, and results of diagnostic imaging. Study data were analyzed with chi-square and descriptive statistics. Results: Three hundred patients were enrolled. The mean age of patients was 71 years (SD ±22 years). 169 patients received c-spine imaging, of whom 53 were NEXUS-negative. There was no difference in imaging of NEXUS-negative patients as a factor of medical provider level of training (p=0.42). Of NEXUS-negative patients receiving imaging, 51 (96%) were over age 65, and 52 were being evaluated for a fall on level ground. Imaging revealed seven positive findings: 2 type III dens fractures (fx), 1 type II dens fx, 1 C6/C7 facet fx, a C4 lamina fx, a C5 lamina fx, and an occipital fx not visualized on head CT.
Two of these injuries were in NEXUS-negative patients. Conclusion: Regardless of level of training, providers in our ED do not consistently apply NEXUS low-risk criteria to the geriatric population presenting after falls. In our cohort, this deviation from applying NEXUS led to the diagnosis of two c-spine injuries that would otherwise have been missed.

Conclusion: Implementation of a triage system is associated with increases in pain scoring but remains unsatisfactory in improving the timeliness of analgesic administration. A triage system combined with rapid pain management may overcome this discrepancy. Limitations of the study are its observational and single-center design.

Objectives: Our primary goal is to determine the proportion of pediatric patients who are suitable for reverse triage. Secondary objectives include determination of demographic, hospital service, and prior admission history covariates that correlate with reverse discharge. Methods: Using the same methodology as we did with admitted adults, we will review the charts of 600 pediatric patients from general pediatric and surgery floors, stratified by age and by seasonality. The study is powered to detect a 20% difference in critical interventions (CIs) by service with α=0.05 and β=0.8. We will use a pediatrician-reviewed list of CIs as a surrogate for the need to remain hospitalized. The absence of a CI over 4 days indicates eligibility for reverse triage. We will collect data on received CIs from day 0 through day 8 to calculate what proportion of pediatric patients does not receive CIs each day. We will estimate the proportion of pediatric patients, and thus the proportion of pediatric medical and surgical beds, that could be created through early discharge of those eligible. Statistical analysis will include descriptive analysis and multivariable logistic regression.

Background: "Physician at triage" and telemedicine have been used successfully (but separately) in the ED. We describe a pilot study of a combination of these approaches, physician telemedical triage (PTMT). Objectives: We hypothesized that PTMT would improve both length of stay (LOS) and time to physician evaluation (TPE) vs. nursing triage alone, and that patient acceptance of PTMT on a three-question survey (overall satisfaction, ease of discussion, and physician understanding of patient needs) would be high. Methods: DESIGN: Retrospective observational cohort study. SETTING: Academic tertiary referral center. PARTICIPANTS/SUBJECTS: Intervention: patients triaged when PTMT was operative underwent PTMT at the discretion of the triage nurse. Control: all patients presenting during the identical time periods one week before and one week after these periods. INTERVENTIONS/OBSERVATIONS: Intervention: PTMT consisted of an interview via videoconferencing software on a tablet computer followed (when appropriate) by auscultation via electronic stethoscope. Control: nursing triage alone. LOS and TPE are reported in minutes as (mean ± SD; 95% CI), and statistical comparisons are via two-sample t-test (two-tailed). Patient satisfaction data are reported as average scores on a five-point (low to high) scale as mean ± SD. Results: PTMT was operative during 24.5 hours over 11 days from April to June 2012. 106 patients were registered during the intervention times, of whom 36 (32.1%) underwent PTMT. 196 patients were registered during control periods. In the primary analysis (all 106 patients), the intervention (I) did not improve LOS vs.
control (C) (I: 266 ± 101; 244-288 vs. C: 258 ± 172; 234-282), but there was a trend towards improved TPE (I: 35 ± 28; 29-41 vs. C: 42 ± 31; 38-46) (p=0.051). In a secondary analysis (the 36 patients who underwent PTMT), the intervention did not improve LOS vs. control (I: 273 ± 125; 231-316 vs. C: 258 ± 172; 234-282), but was associated with improved TPE (I: 16 ± 15; 11-21 vs. C: 42 ± 31; 38-46) (p<0.0001). Scores on the three-question patient satisfaction survey were 4.73 ± 0.72, 4.70 ± 0.73, and 4.66 ± 0.70. Conclusion: In a small pilot, PTMT did not improve length of stay but was associated with improvements in time to physician evaluation. Patient satisfaction with this intervention was high.

Differences in Noninvasive Thermometers in the Adult Emergency Department
Joshua Zwart, Sean Toussaint, Nicole M. Acquisto, and Ryan P. Bodkin, University of Rochester, Rochester, NY

Background: Detection of an accurate temperature in the emergency department (ED) is integral to assessment, treatment, and disposition. Invasive temperature monitoring is not feasible as a triage vital sign, and noninvasive monitoring with an oral or temporal artery (TA) temperature device is commonly utilized. Objectives: The primary objective of this study was to compare temperature readings from noninvasive temperature devices in the ED. The secondary objective was to determine if there is a larger discrepancy between noninvasive temperature recordings in febrile patients. Methods: A convenience sample of adult patients presenting to triage at a large tertiary care ED between April and May 2012 was evaluated. All patients were included if they required a temperature measurement based on standard care. Data collection included demographic information and both an oral and a TA temperature recording taken consecutively. Oral and TA temperatures were collected with the General Electric ProCare 400 Vital Signs monitor and the Exergen Infrared Temporal Scanner TAT-5000, respectively. Fifty patients were needed to detect a difference of 0.5°C ± 0.5 with a power of 80%. Objectives were evaluated using the paired Student's t-test. Results: A total of 68 patients were identified during the study period. There were 37 males and 31 females with a mean age of 49.8 years (SD ±19.3). Mean oral temperature was 36.97°C (SD ±1.11) and mean TA temperature was 36.37°C (SD ±0.68). The mean difference was 0.31°C (SD ±0.68), p=0.0004. Overall, 47% of patients had a difference in temperature recordings ≥0.5°C. There were 16 febrile patients, determined by a reading >38°C on either the oral or TA thermometer. The mean temperature difference in those patients was 0.994°C (SD ±0.718), compared to a mean temperature difference of 0.094°C (SD ±0.511) in the afebrile patients, p=0.0001. A total of 75% of fevers recorded by the oral thermometer were not recorded by the TA thermometer. Conclusion: There was a statistical difference in recorded temperatures between oral and TA thermometers and a clinically significant difference in 47% of patients. Febrile patients had a greater discrepancy between noninvasive temperature recordings compared to those who were afebrile. Caution should be taken when evaluating temperature recordings with these noninvasive devices.

Methods: This was a prospective observational study that took place over 9 months at an urban teaching ED. The study population included adult patients requiring procedural sedation, as determined by the ED physician.
The sedations were performed by EM residents with attending physician supervision, with sedative choice and dosing based on their judgment. Data were extracted from a form completed by the resident performing the sedation, and missing data were obtained from nursing flowsheets. Results were analyzed for procedure performed, type and dose of sedative used, rate and nature of adverse events requiring intervention, and frequency of ETCO2 changes resulting in intervention. Results: One hundred thirty-seven patients underwent procedural sedation for orthopedic reduction (123), I&D (8), electrocardioversion (3), wound exploration (2), and LP (1).

Methods: We performed a prospective study of ED patients with a complaint of pain. The VITA utilizes a hand-held plastic case containing a window with a pair of movable sliders. These sliders divide the VITA tool into areas of blue, red, and yellow color, reflecting "unaware of pain," "aware of nothing but pain," and "rest of the time." At ED presentation and discharge, patients were instructed to move the sliders to divide the timeframe into colors reflecting the proportion of time spent in each state during a specific time period. From this, we generated a single numeric score. Intake and discharge NRS scores were recorded for comparison. ED staff were blinded to the VITA and NRS scores we collected, and these scores were not used in the clinical assessment and treatment of pain.

procedures performed in the emergency department. Recent interest in the use of dynamic US guidance for performance of cricothyroidotomy (cric) has sparked a debate regarding its applicability in a crash airway situation, when nerves are highest. It has been suggested that US-guided marking of the cricothyroid membrane (CTM) as a pre-intubation procedure for static performance may be better than the dynamic method because it removes the ultrasound from the crash procedure. To our knowledge, no prior study has evaluated the feasibility of using US to premark the CTM prior to attempted intubation. Objectives: To determine the feasibility and reliability of US-guided marking of the CTM prior to attempted intubation so that this marking may be utilized as the location for the initial incision after failed intubation. Methods: Twenty-three resident and attending physicians at the University of Utah participated in the study as both operators and models. Prior to simulated intubation, the operator used a linear high-frequency probe in the axial and sagittal planes to identify and mark the CTM with an invisible-ink pen. Failed intubation was simulated by cricoid pressure, flexion, extension, and rotation of the model's neck. Following this simulation maneuver, the same operator again identified the CTM with US and marked the location with a black pen. The difference between the pre- and post-intervention markings was measured in mm. The length of the CTM was also measured as a reference. Results: Twenty-three models and operators were utilized for data collection. The average CTM sagittal length was 13.9 mm ± 0.5 (95% CI). The average sagittal and axial differences pre- and post-simulated intubation were 0.91 mm ± 0.56 (95% CI) and 1.04 mm ± 0.66 (95% CI), respectively. As an axial incision has been described in the bougie-assisted cric method, the sagittal variability should be most important. Based on these data, the sagittal variation is about 1/15 of the total length of the CTM.
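As a companion to the marking-variability result above, the following is a minimal sketch of how a mean displacement with a 95% confidence interval could be computed from per-operator pre/post marking distances. The distances listed are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch: mean marking displacement with a 95% CI, the kind of
# summary reported above. Distances are hypothetical placeholders, NOT
# the study's measurements.
import math
from scipy import stats

sagittal_shift_mm = [0.5, 1.2, 0.8, 1.5, 0.3, 1.0, 0.9, 1.1]  # per-operator shifts (hypothetical)

n = len(sagittal_shift_mm)
mean = sum(sagittal_shift_mm) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sagittal_shift_mm) / (n - 1))
sem = sd / math.sqrt(n)
half_width = stats.t.ppf(0.975, df=n - 1) * sem  # 95% CI half-width (t distribution)

print(f"mean shift = {mean:.2f} mm ± {half_width:.2f} (95% CI)")
```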
Objectives: This study aims to delineate specific learning curves for infant VL and intubation in novice house officers (HO). Methods: IRB exemption was obtained. Volunteer airway novices were housestaff from anesthesiology (3), pediatrics (6), and emergency medicine (6). HOs were timed performing repeated airway management tasks on a Laerdal infant airway mannequin using a pediatric Glidescope VL with a size 0 blade and a styletted 3.0 uncuffed endotracheal tube (ETT). Participants performed six successive laryngoscopies obtaining an adequate glottic view. They then maintained a stable glottic view while passing the ETT into the trachea six successive times. Finally, three combined laryngoscopy and intubation sequences were performed, with attention to the timing of each component. Results: Laryngoscopy and intubation learning curves showed initial rapid improvement and then flattened (figure). Mean time of the first laryngoscopy was 5.5 seconds, the second 3.7 s, and the third 2.6 s; this reached statistical significance by Student's t-test (p<0.05). The learning curve of isolated intubation flattened after the second attempt: mean time of the first intubation was 14.9 s and of the second attempt 7.1 s (p<0.05). In combined laryngoscopy and intubation, mean total intubation times were 9.9 s, 12.6 s, and 7.8 s and did not definitively illustrate further improvement.

Objectives: To determine the incidence and type of adverse events by age group: pediatric (≤21 years old), nongeriatric adult (22 to 64 years), and elderly (≥65 years), with the elderly subdivided into "young" geriatric (65-79 years) and "older" geriatric (≥80 years). Methods: Prospective data collection on a hospital-wide form for all patients undergoing procedural sedation in the ED over a 10-year period. Statistical analysis used SPSS. Results: 2460 procedural sedations were performed, with ages ranging from 2 weeks to 102 years; 55% were male, with 857 pediatric (34.8%), 940 nongeriatric adult (38.2%), and 633 geriatric adult (25.7%) patients, of whom 476 (75.1%; 19.3% of the total) were younger geriatric and 187 (24.9%) were older geriatric patients. The most common adverse events were hypotension, hypoxia (oxygen saturation <90%), bradypnea/apnea, dysrhythmia, hypertension, and allergic reaction. The incidence of adverse events was 4.6% in pediatric, 18.2% in nongeriatric adult, and 24.0% in geriatric adult patients (p<0.01). The incidence of adverse events in the younger geriatric patients was 21.6% and in the older geriatric patients 31% (p<0.01). Older patients tended to have higher American Society of Anesthesiologists (ASA) classes, but age group differences remained even when other variables (ASA class, sedative used, procedure done, etc.) were factored out. Conclusion: Significant differences in adverse events exist based on age group. Pediatric patients have the lowest incidence, nongeriatric adult patients an intermediate incidence, and geriatric patients the greatest incidence of adverse events. The "older" geriatric patients, compared to "younger" geriatric patients, nongeriatric adults, and children/infants, have an even greater risk of adverse events during procedural sedation. This is true irrespective of other factors, including sedative used and ASA class. ED physicians need to be aware of the greater possibility of side effects and complications in older patients undergoing procedural sedation.

Objectives: The aim of this study was to assess the accuracy and timeliness of using tracheal ultrasound to examine endotracheal tube placement in cardiac arrest patients.
Methods: This was a prospective, observational study conducted at the emergency department of a university teaching hospital. Patients underwent emergency intubation due to cardiac arrest. Airway ultrasonography was performed during emergency intubation with the transducer placed transversely on the trachea over the suprasternal notch. Quantitative waveform capnography was used as the criterion standard for confirmation of tracheal intubation. The main outcome was the relative timeliness of confirmation by airway ultrasonography versus capnography. Results: A total of 16 patients and 19 intubations were included in the analysis. The endotracheal tube was placed in the trachea in 16 intubations and in the esophagus in three. The overall sensitivity and specificity of ultrasound for confirmation of tracheal intubation were both 100%. The capnography application time after intubation was 17.5 (10.0-32.5) seconds. The capnography confirmation time after application was 30 (10-120) seconds. The ultrasound confirmation time for endotracheal tube placement after application was 5 (4-5) seconds. Conclusion: When patients were in a low pulmonary blood flow state, such as cardiac arrest, capnography confirmation of endotracheal tube placement was slow. Ultrasound confirmation was rapid and accurate, and was not affected by pulmonary blood flow. Ultrasound confirmation of endotracheal tube placement is therefore more useful in the emergency department.

Checklist Improves Safety Documentation in Emergency Department Sedations
R. Jason Thurman, Suzanne Bryce, and Lara Phillips, Vanderbilt University School of Medicine, Nashville, TN

Background: Appropriate assessment and documentation of critical safety parameters and informed consent is essential to the safe and effective administration of conscious sedation in the emergency department setting. We hypothesized that creating and implementing a comprehensive electronic pre-sedation checklist would enhance the pre-procedural assessment of patients as well as improve the documentation of safety parameters and informed consent in the medical record. Objectives: We sought to create an interactive electronic checklist to prompt physicians to perform critical safety checks prior to conscious sedations and to improve the documentation of safety parameters and informed consent for sedation in the medical record. Methods: We performed a retrospective analysis of 283 consecutive medical records of patients who had undergone conscious sedation in our emergency department. We assessed for the presence or absence of documentation of several important safety parameters, such as last oral intake, performance of a pre-sedation time out, ASA status, pre-sedation score, presence of rescue equipment, and documentation of informed consent. We created and implemented our electronic form after the retrospective analysis was complete, then prospectively studied the effect of the form on documentation of selected safety measures and informed consent. Results: Of the 283 retrospective patient medical records reviewed in the study, 59% (n=167) had documentation of last p.o. intake prior to sedation, while 63% (n=178) had documentation of informed consent and 80% (n=226) had documentation of a pre-sedation time out.
Following the implementation of our pre-sedation checklist, we prospectively studied the medical records of 119 patients who underwent conscious sedation in the emergency department to assess the effect of the checklist. In this group of patients, 97% (n=112) had documentation of last p.o. intake prior to sedation, while 94% (n=115) had documentation of informed consent and 99% (n=118) had documentation of a pre-sedation time out. Conclusion: The implementation of an interactive electronic pre-sedation checklist was associated with dramatic improvement in the assessment and documentation of safety parameters and informed consent for conscious sedation in the emergency department setting.

Objectives: This study was designed to determine how many cricothyrotomies residents have performed on living patients, the breadth and prevalence of alternative methods of instruction, and residents' degree of comfort with performing the procedure unassisted. Methods: EM residents nearing graduation were surveyed using a web-based instrument. Data regarding the number of cricothyrotomies performed on living and recently deceased patients, animals, and models/simulators were gathered. Residents indicating experience with the procedure were asked additional questions as to the indication, supervision, and outcome of their most recent cricothyrotomy. Data were also collected regarding experience with rescue airway devices, observation of cricothyrotomy, and comfort with the procedure ("0-10" scale with "10" representing complete confidence). Results: Of 296 residents surveyed, 22.0% had performed a cricothyrotomy on a living patient, and 51.6% had witnessed at least one performed. Those who had completed a single cricothyrotomy reported a significantly greater level of confidence (6.3, 95% CI 5.7-7.0) than those who had done none (4.4, 95% CI 4.1-4.7), p<0.001. Most respondents (68.1%) had utilized the recently deceased to practice the technique, and those who had done so more than once reported higher confidence (5.5, 95% CI 5.1-5.9) than those who had never done so (4.1, 95% CI 3.7-4.5), p<0.001. Residents who had practiced cricothyrotomy on both simulators and the recently deceased expressed more confidence (5.4, 95% CI 5.0-5.8) than those who had utilized only simulators (4.0, 95% CI 3.6-4.5), p<0.001. Neither the utilization of models, simulators, or animals, nor the observation of others performing the procedure, independently affected reported confidence among residents. Conclusion: While the prevalence of cricothyrotomy and reported comfort with the procedure remain low, performing the procedure on living or deceased patients increased residents' confidence in undertaking an unassisted cricothyrotomy upon graduation in the population surveyed. There is evidence that multiple methods of instruction may yield the highest benefit, but further study is needed.

Objectives: To perform a theoretic analysis of the effect of obesity on expected hemodynamic changes in mean arterial pressure (MAP) and cardiac output (CO) during hemorrhagic shock, using a derivation of the Guyton model of cardiovascular physiology. Methods: Computer simulation studies were used to predict the relative effect of increasing body mass index (BMI) on global hemodynamic parameters during hemorrhagic shock. The analytic procedure involved recreating the physiologic conditions associated with changing BMI for a virtual subject in an in silico environment. The model was first validated for the known effect of a BMI of 30 on iliofemoral venous pressures.
Then the relative effect of changing BMI on target cardiovascular parameters was examined during a simulated re-enactment of the acute loss of blood volume in class II hemorrhage. The percent changes in these parameters were also compared between the virtual nonobese and obese subjects. The model parameter values are derived from known population distributions, which produce simulation outputs suited to a deductive systems analysis assessment rather than traditional frequentist statistical methodologies. Results: In the hemorrhage simulation studies, moderate increases in BMI were found to produce significantly greater decreases in MAP and CO as compared to the normal subject. During hemorrhagic shock, the virtual obese subject had 42% and 44% greater falls in CO and MAP, respectively, when compared to a nonobese subject. A systems analysis of the model revealed that an increase in the resistance to venous return, due to changes in intra-abdominal pressure resulting from obesity, was the critical mechanism responsible for the observed hemodynamic differences. Conclusion: This study suggests that obese patients in hemorrhagic shock may have a higher risk of hemodynamic instability than their nonobese counterparts. The influence of obesity on cardiac and vascular compliance and the importance of obesity-induced increases in intra-abdominal pressure in reducing venous return appear to be the dominant mechanisms responsible for these observed differences.

Background: Sphingosine-1-phosphate (S1P) is a bioactive sphingolipid present in plasma which potently regulates endothelial responses through interaction with its receptors (S1PR). We have previously shown that while S1PR1 inhibits vascular permeability, S1PR2 promotes vascular leakage in the lung and retina through a Rho-Rho kinase (ROCK)-dependent mechanism. Objectives: Since vascular permeability is an early component of the inflammatory response, we aimed to study the role of S1PR2 in vascular inflammation during endotoxemia. Methods: Endotoxemia was induced in wild-type (S1pr2+/+) and S1pr2-null (S1pr2-/-) mice by intraperitoneal injection of lipopolysaccharide (LPS). Plasma cytokine levels were determined by ELISA. Vascular permeability was assessed by Evans blue dye assay. Expression of adhesion molecules and procoagulant and proinflammatory markers was determined by reverse transcription-quantitative PCR analysis and by immunohistochemistry. Bone marrow chimeras were generated by irradiation of S1pr2+/+ and S1pr2-/- mice and intravenous injection of bone marrow cells from S1pr2-/- or S1pr2+/+ mice, respectively (S1pr2-/- to S1pr2+/+ and S1pr2+/+ to S1pr2-/- chimeras). S1pr2+/+ to S1pr2+/+ chimeric mice were used as controls. Results: Both S1pr2+/+ and S1pr2-/- mice developed systemic inflammation upon LPS injection. However, cytokine levels fell more rapidly in S1pr2-/- mice than in wild-type mice. In addition, S1pr2-/- mice exhibited less vascular permeability and lower levels of adhesion molecules and of procoagulant (tissue factor) and inflammatory (monocyte chemotactic protein-1) markers in the lung, liver, and kidney (40-80% inhibition). Similarly, pharmacological inhibition of S1PR2 signaling by JTE013 resulted in faster resolution of systemic inflammation, less vascular permeability, and less endothelial inflammation during endotoxemia (32-86% inhibition).
Experiments with bone marrow chimeras indicate a critical role for S1PR2 in the vascular/stromal compartment in the regulation of vascular permeability and vascular inflammation. Conclusion: Our data using pharmacological and genetic approaches indicate that S1PR2 is a key regulator of the proinflammatory phenotype of the endothelium during endotoxemia and identify S1PR2 as a novel therapeutic target for vascular disorders.

Objectives: To determine whether patient demographic characteristics (race, age, and sex) are associated with the peak troponin (cTnI) level recorded during an NSTEMI event. Methods: The study population included all patients who presented to the ED at Washington Hospital Center from 11/15/2009 to 12/31/2011. Medical charts were extracted for patients with an ICD-9 code of 410.71 (NSTEMI). These patients presented to the ED and were subsequently admitted to inpatient floors. Key variables (age, sex, race, and cTnI levels) were extracted from patient records. Univariable and multivariable linear regression were performed. Exploratory data analyses were conducted to look for an association between troponin levels and age, race, and sex. Peak troponin levels were not normally distributed and were log-transformed for regression analyses to satisfy model assumptions of normality. A Shapiro-Wilk test was performed to test the normality assumption for the transformed troponin levels. Univariable and multivariable linear regression analyses were performed to measure the association between peak cTnI and patient characteristics. All analyses were conducted in Stata version 12. Results: 460 patients matched our search criteria of ICD-9 code 410.71. Of these, five patients were excluded from further analysis. Thirteen patients had multiple admissions to the ED. Peak troponin levels ranged from 0.012 to 800 ng/ul. Univariable and multivariable regression analysis of the geometric mean of peak ln(cTnI) was completed for the various demographic characteristics. Sex and age showed nonsignificant differences in peak troponin. Race showed a significant difference in peak troponin in both the univariable and multivariable analyses: Caucasians had a 1.9 times (95% CI 1.24-2.96) higher geometric mean peak cTnI level than African-Americans. Conclusion: Race was the only characteristic with a statistically significant difference in peak cTnI level, with African-Americans having a lower peak than other races. This information could be important for future research because diagnostic thresholds of cTnI may need to be modified to account for differences between races.

Objectives: To determine the ability of the TnI to detect, and the NACPR to exclude, patients with confirmed ACS in a low-risk chest pain population. Methods: A retrospective cohort study of patients admitted to an ED observation unit at two urban teaching hospitals over 21 consecutive months (1/11-9/12). Low-risk patients were defined by Reilly/Goldman criteria and underwent clinical re-evaluation, serial TnI/ECG at 0, 3, and 6 hours, then stress imaging. Patients with positive testing were admitted. ACS was defined as >70% coronary stenosis, revascularization, or death. The TnI assay used was the Siemens Centaur Ultra (detection limit 0.006 ng/ml). NACPR criteria were applied to patients retrospectively. Data were abstracted from an EDOU database and electronic records.
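For the test-characteristic analyses that recur throughout these abstracts, including the TnI/NACPR objective above, a minimal sketch of computing sensitivity, specificity, and NPV with Wilson 95% confidence intervals from a 2x2 table might look like the following. The counts are hypothetical placeholders, not study data.

```python
# Minimal sketch: sensitivity, specificity, and NPV with Wilson 95% CIs
# from a 2x2 table. Counts are hypothetical placeholders, NOT study data.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

tp, fn = 18, 2    # ACS present: detected / missed (hypothetical)
fp, tn = 40, 540  # ACS absent: test positive / test negative (hypothetical)

for name, k, n in [("sensitivity", tp, tp + fn),
                   ("specificity", tn, tn + fp),
                   ("NPV", tn, tn + fn)]:
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```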
Background: The arrival of the new oral anticoagulant dabigatran presents emergency physicians with a major challenge when facing a bleeding patient. Dabigatran, as opposed to warfarin, has neither an antidote nor a way to measure the degree of its effect, and hence offers very limited options for controlling the bleeding. Objectives: To compare overall mortality after a first visit to the emergency department (ED) for a bleeding complication among patients on dabigatran, warfarin, or aspirin. Methods: We conducted a post-hoc analysis of a database of all patients who presented to a tertiary care ED with any kind of bleeding or suspicion of a bleed between March 2011 and August 2012 while taking dabigatran, warfarin, or aspirin. The primary endpoint was long-term survival. We fit a Cox proportional hazards model, controlled for age, to calculate the hazard ratios (HR) for dabigatran and aspirin, using warfarin as the baseline. Statistical significance was set at alpha=0.05, and results are presented with 95% confidence intervals (95% CI). Results: There were 934 patients meeting inclusion criteria, with a mean follow-up period of 1 year. There were 108 deaths (11.5%) recorded within the follow-up period. The mean age was 74.3 years, with no statistically significant difference among the three groups. The risk of dying for patients on dabigatran was significantly higher than for those on warfarin (HR=2.1, 95% CI 1.0-4.5, p=0.05) after controlling for age. Aspirin had a lower mortality rate compared to warfarin, but this was not statistically significant (HR=0.75, 95% CI 0.50-1.14, p=0.18). Conclusion: This was a retrospective study conducted in only one hospital, and only 32 patients were taking dabigatran. Due to this small number, it was impossible to control for comorbidities as confounding variables. Despite its limitations, this study showed an increase in overall mortality in patients with bleeding complications on dabigatran compared with warfarin or aspirin. ED physicians must be aware of the potentially lethal outcomes in this high-risk group of patients.

Objectives: To explore differences in AHF pharmacologic management between emergency physicians (EP), hospitalists (HOSP), and cardiologists (CARD) to better understand the use and rationale of NV. We hypothesized NV use would differ significantly by specialty.

Objectives: To analyze the effect of novel process changes implemented to improve door-to-ECG times for STEMI patients who walk into the ED of a large, urban, public teaching hospital. These changes included creating a "Cardiac Triage" designation that is assigned upon arrival based on chief complaint. The "Cardiac Triage" designation prioritizes the patients in an electronic patient tracking system. In addition, the ECG technician and machine were moved to the area where medical screening exams are conducted in order to further streamline the process. Methods: The changes were implemented during April 2011 through June 2011. The time from door to ECG for STEMI patients who walked into the ED was compared between the year before (April 2010 through March 2011) and the year after (July 2011 through June 2012) the implementation of the new process changes. The mean and median door-to-ECG times were compared using non-parametric statistics. The time period during which the process changes were piloted was not included in the analysis.

Background: Patients with potential ACS typically receive a 12-24 hour "rule out" followed by some form of provocative testing.
Recently, three randomized trials found that a coronary CTA-based strategy is more efficient, but in these trials stress testing was usually deferred to the next day. If stress testing were performed within the same time frame as CTA, the two strategies might be more similar. Objectives: We tested the hypothesis that stress testing can safely be performed within several hours of presentation. Methods: We performed a retrospective cohort study adhering to the criteria defined by Gilbert and Lowenstein for chart abstraction. Data points were defined in accordance with ACC/AHA key definitions. All patients who presented with potential ACS and were placed in a clinical pathway that performed stress testing after troponin values 2 hours apart were included. We excluded patients with STEMIs or elevated initial troponins. We collected demographic data, medical and cardiac history, labs, ECG results, and the timing of tests using a structured data collection instrument with excellent inter-rater reliability.

Background: Thoracic aortic dissection (TAD) is an uncommon, deadly disease with a high rate of misdiagnosis. Current diagnostic strategies involve advanced imaging studies with high costs and risks of procedural complications, contrast nephropathy, and radiation exposure. Of these advanced imaging studies, CT angiography (CTA) is performed the majority of the time. With two recent meta-analyses indicating high sensitivity, incorporating D-dimer testing into a diagnostic approach for TAD seems promising as a way to reduce the need for advanced imaging. However, a testing threshold (TT), the pretest probability below which a D-dimer diagnostic approach would be the better alternative to proceeding directly to CTA, has not been established. Objectives: To determine, through a decision analytic model, the TT for choosing a diagnostic pathway incorporating a D-dimer assay in patients suspected of TAD in scenarios where CTA is used as the primary imaging modality. In addition, we aimed to determine through sensitivity analysis which model inputs have the largest effect on the TT. Methods: A model was developed using decision analytic software (TreeAge Pro 2012) to determine the TT (figure). Model inputs were obtained through literature review, with clinician assumption used when data were unavailable. One- and two-way sensitivity analyses were performed to determine the base-case TT and the drivers of the model. Conclusion: Our decision analytic model found a TT of 0.4% with six major drivers. The TT is significantly lower than the ability of current clinical decision rules (CDR) to reduce pretest probability before testing for TAD. Further study is warranted to develop better CDRs and to assess the effect of cost on the TT.

Objectives: We hypothesized that a video podcast can improve students' knowledge and confidence when responding to a potentially violent person in the ED. Methods: Fifty-three fourth-year medical students on their emergency medicine clerkship were given a pre-test composed of eight objective questions and five subjective survey items regarding VPM. Throughout the 4-week clerkship, students had unlimited access to a 10-minute VPM video podcast. On the final day of the clerkship, students were administered an identical eight-item objective knowledge-based post-test with added survey items.

Objectives: We sought to determine whether an email-based system for procuring medical student evaluations would garner a higher response rate than a hard-copy end-of-shift evaluation form.
Methods: Retrospective, observational. In 2010, student evaluations were performed on a shift-by-shift basis using an end-of-shift evaluation form. In 2011-12, an automated system was developed which emailed faculty a reminder to complete an online evaluation from a link in our electronic medical record (EMR). In 2012-13, the system was modified to allow evaluations to be completed by replying to an email generated through the EMR. The first six blocks of each year were compared. The shift requirements of the students for each rotation block did not change over the 3 years. Results: The number of evaluations per student obtained over the course of the clerkship for the first six blocks of each academic year was compared using a two-sample rank-sum (Mann-Whitney) test (p<0.05). Additionally, the student with the lowest number of evaluations for each block was identified, and that minimum number was compared across blocks and years. Conclusion: An automated, email-based system for procuring medical student evaluations from faculty in an EM clerkship allowed capture of a greater number of student-preceptor interactions than handwritten shift evaluation cards. Moreover, the minimum number of evaluations was greater using the email-based system. The clinical use of an EMR facilitates the implementation and use of such a system.

Background: New ACGME program requirements include ensuring effective, structured hand-over processes. Implementation of a standardized hand-off method in the emergency department has been shown to be possible, but has not been studied for efficacy. Objectives: This study, through observational data collection and an attitude instrument, investigated the effect of an educational initiative on the data transmitted during sign-out, the number of post-sign-out unexpected events, and resident and faculty attitudes toward standardization. Methods: The residents and faculty were first surveyed on their attitudes toward standardization. Then two weeks of morning sign-outs were observed at each of the two participating institutions, collecting data on the transmission of 14 distinct data points based on SBAR methodology. After the initial observation period, an extensive didactic/discussion-style lecture was presented and standardization tools were made available for residents' use. The two-week observations and surveys were then repeated. Results: 246 patient encounters (150 before the intervention, 96 after) met inclusion criteria. There was no significant change in the mean number of unexpected patient events or the mean number of data points transmitted per patient before and after the intervention. Transmission went down on four items after the intervention, increased on one, and showed no significant change on the nine others; notably, four major items were virtually identical. Residents and faculty had generally favorable views of standardization, but differed in their knowledge of their institutions' written sign-out policies and in their views on the seamlessness of the current process. Conclusion: The results indicate that although physicians may be open to standardization, education alone may be insufficient to standardize the process or make significant improvements to patient care.

Objectives: We attempt to show that in a group of novice medical students with no lumbar puncture experience, UGLP can be achieved more reliably and with a lesser degree of difficulty than traditional landmark-guided lumbar puncture (LGLP).
Methods: We performed a randomized crossover study of 61 first- and second-year medical students. All students were given a standardized half-hour lecture on LGLP followed by a standardized half-hour lecture on UGLP. After training, participants were randomized to either LGLP on a standard lumbar puncture trainer or UGLP on a standard lumbar puncture trainer configured with a novel ultrasound-friendly insert. They were then crossed over to the opposite group. Each participant's number of attempts and perceived level of difficulty were recorded for the two skills. Participants were brought back after 6 weeks to evaluate their retention of each skill, again recording the number of attempts and perceived level of difficulty.

Objectives: The goal of this project was two-fold: 1) to assess ED patients' history of sexually transmitted disease, and 2) to assess baseline knowledge about HIV, chlamydia, syphilis, and gonorrhea among incoming ED patients. Methods: Cross-sectional data were collected from a convenience sample of incoming patients in an urban ED with chief complaints of potentially STI-related illness from January to April 2012. These patients completed a 48-item anonymous survey during their ED visits. Results: Only 20 (5%) respondents correctly named all infections that are or are not sexually transmitted. Respondents more often described the prognoses of chlamydia, gonorrhea, and HIV correctly than incorrectly (all p<0.0001). However, respondents more often described the prognosis of syphilis incorrectly than correctly (p<0.0001). Respondents were 2.33 times more likely to self-report above-average knowledge of HIV than of gonorrhea, syphilis, and chlamydia combined (p<0.0001). Respondents were 68% as likely to self-report below-average knowledge of syphilis as of HIV, gonorrhea, and chlamydia combined (p<0.0001). There was no significant difference in self-reported knowledge of gonorrhea or chlamydia relative to the other three STIs combined (p=0.119 and p=0.84).

Objectives: Our study investigates which components of the residency application were most predictive of securing a rank-list spot. Methods: This was a retrospective analysis of EM residency applicants over a 4-year period at a community-based, university-affiliated emergency medicine program comprising eight allopathic positions per year. Over 600 candidates apply annually, with 100 invited to interview. Interviews were open-file, conducted by three or four academic faculty members and one chief resident, and scored on a standardized numerical scale. Interviewers were not blinded to the applicants' academic records. Applicants' interview scores, USMLE scores as reported in their transcripts, and final rank scores were used to calculate correlation coefficients among the variables. For applicants who took the COMLEX instead of the USMLE, a conversion factor of (score × 0.24) + 67.97 was used for our analyses. Linear correlation and regression analyses, as well as descriptive statistics, were used for analysis. This study was IRB exempt. Results: 396 interviews were conducted over 4 years. Mean interview scores were similar among the 4 years (ANOVA, p=0.29). Mean USMLE scores were also similar (mean 212, standard deviation 17.3, p=0.36). The correlation coefficient for interview score and rank position was -0.87, with an r-squared of 0.75. The correlation coefficient between USMLE Step 1 score and rank position was -0.32, with an r-squared of 0.10.
The correlation coefficient for interview score and USMLE Step 1 score was 0.34, with an r-squared of 0.12. Conclusion: USMLE Step 1 score accounts for only 12% of the variance in interview scores and 10% of the variance in rank position. The interview is the most important determinant of rank position on the EM match list. Future studies will determine whether the measures that predict a successful applicant to the match also predict success as a resident.

Methods: Adult patients were recruited from the ED waiting room and included in the study if they were English-speaking, had basic reading skills, and were in no apparent distress. Categorization into control or intervention groups was based on the day of presentation. All participants were initially asked the generic name for Tylenol, after which they were told the answer. The intervention group then received a 5-minute teaching session guided by a visual aid prior to a written and an interactive exam. Control groups did not receive teaching prior to the exams. The written exam asked about the target organ in APAP toxicity and asked patients to circle APAP-containing products from a list of product names. The interactive exam required subjects to physically sort bottles containing APAP from an array of OTC and prescription bottles. The total possible score from the written and interactive exams combined was 20 points (10 points each). Results: Preliminary data from 100 participants (target sample size 200) were demographically similar in age (average 42 years) and race (predominantly African American). The average level of education was high school. Prior to any intervention, 12% of all participants knew the generic name. The intervention group scored 78% overall and 68% on the written portion. The control group scored 56% overall and 33% on the written exam. Both groups performed similarly in the sorting of bottles (control group 79%; intervention group 88%). While only 31% in the control group identified the liver as the affected organ in toxicity, 78% of the intervention group answered correctly. Conclusion: The teaching session improved patients' ability to identify APAP-containing products. Interestingly, both groups may have performed well on the interactive portion of the exam merely from having been told the generic name initially and being able to physically look for the name on the bottles. Even very brief teaching opportunities may be beneficial. A future intervention may include incorporating the visual aid into ED discharge instructions and investigating retention of this vital information.

Objectives: The purpose of the study was to determine the change in written comments on faculty summative assessments of residents following the implementation of a faculty development program involving a session on providing feedback, a financial incentive, and a daily feedback card program. Methods: The study occurred at a single academic institution from 4/2011 to 7/2012 and included all faculty and residents at the institution. A faculty development training session on feedback occurred at the beginning of the study, followed by implementation of a financial incentive for summative evaluations by faculty, and then implementation of a daily feedback card program. Faculty received copies of their completed daily feedback cards prior to filling out summative assessments of residents.
A qualitative and quantitative assessment of written comments from summative evaluations during three separate two-month periods was performed: Period 1, the beginning of the study period; Period 2, after faculty development training and implementation of the financial incentive but before implementation of the daily feedback card program; and Period 3, after all aspects of the program were in place. Results: The total numbers of summative assessments with written comments from Periods 1, 2, and 3 were 81, 100, and 195, respectively. Figure 1 shows the total numbers of core competency-specific and constructive comments made on summative evaluations. Conclusion: Faculty development activities can increase the total number of resident evaluations and the number of evaluations with core competency-specific and constructive comments. Isolated faculty development sessions and financial incentives alone may not improve the quality of comments in summative evaluations without ongoing reinforcement. The use of daily feedback cards specific to the ACGME core competencies may be one way to provide this regular reinforcement to faculty filling out summative evaluations on residents.

Objectives: We hypothesized that medical students teaching one another simulation would be effective for learning. Methods: Each group of students rotating through the ED is required to attend core lectures that teach basic EM concepts. We developed 3 clinical scenarios that are high-yield for emergency medicine and could be taught using simulation. These cases included management of a basic disease process and stabilization of a life-threatening cardiac arrhythmia. We identified learning goals, outcome checklists, and a list of resources for each topic. We performed a pilot study of student-led simulations with one faculty member observing the groups and ensuring all material was presented. The sessions were assessed by surveying student satisfaction and subjective learning. Results: Forty students participated in the student-led simulation and all completed the survey. Evaluation results are listed in the attached table. Conclusion: Student attitudes toward self-directed learning in a simulation environment are very positive. Student-directed simulation is seen as an enjoyable method of learning and does not appear to be extremely laborious or time-intensive. This type of instruction could be efficiently utilized during clerkships to enhance education and promote more self-directed learning. Additionally, student-directed learning will decrease faculty burden.

Background: As part of the practice-based learning and improvement competency, residents are expected to identify areas of weakness and work to improve their knowledge in those areas. However, little is known about the process by which learners assess their learning needs (perceived weaknesses and strengths) and then allot their studying time. This is termed self-regulated learning. Objectives: To investigate the impact of the self-assessed diagnostic strengths and weaknesses of EM residents on their allocation of learning time. Design: This is a multi-center study anchored on the In-service Training Examination (ITE). Participants: We administered an instrument examining self-assessment and time allotment to 98 EM residents. We excluded interns because they had not yet taken the ITE to guide them. The instrument was developed for a study in medical students, but we focused on the 18 content domains reported by the ITE (Gruppen, 2000).
Residents determined their levels of confidence in each of the domains (e.g., cardiovascular) using a ten-point scale anchored by "least confident" (1) and "most confident" (10). Then they estimated time spent learning about the same domains using an 11-point response scale anchored by "none at all" (0) and "a great deal" (10). Comparisons were made between residents' areas of confidence and time allocated to studying each domain. Correlations between a resident's confidence and educational time spent were calculated using within-subject analysis. Objectives: We attempt to show that in a group of novice medical students who have limited peripheral intravenous experience, ultrasound-guided peripheral intravenous (UPIV) cannulation can be achieved more reliably and with a lesser degree of difficulty than standard peripheral intravenous (SPIV) cannulation. Methods: We performed a randomized crossover study of 61 first- and second-year medical students. All students were given a standardized half-hour lecture on SPIV cannulation followed by a standardized half-hour lecture on UPIV cannulation. Each participant's number of attempts and perceived level of difficulty (on a ten-point Likert scale, where level 10 is the most difficult and 1 the easiest) were recorded for the two skills. They were brought back after six weeks to evaluate their retention of each skill, again recording number of attempts and perceived level of difficulty. Background: The need to develop formal curricula for medical student procedural skills has been emphasized by the AAMC. However, there have been few published research studies about the effectiveness of such curricula. The teaching and assessment of required procedural skills has traditionally been unstructured and episodic, with students encouraged to practice procedures as they present themselves. Objectives: To determine the effect of a new required M4 emergency medicine rotation on students' reported experiences with procedures. Methods: Six consecutive classes of senior medical students completed a survey reporting their experiences with the procedural skills curriculum during their four years of medical school. Data from 2007-2009 were used to design a required fourth-year course in EM that emphasized procedural skills. Data from three years after implementation of the course (2010-2012) were used to assess the effect of the EM clerkship on procedural experiences. Data were analyzed using paired Student's t-tests to compare pre- and post-curriculum changes. Results: Thirty-four procedural skills were assessed by graduating students to determine if they had performed the procedure at least once during their medical education. A total of 317 students completed the survey (response rate of 80%), with 151 students participating pre-EM and 166 students participating post-EM clerkship. Overall, students reported an average improvement in procedural experiences of 9.7% after introduction of the required EM rotation. Of the 34 skills, six demonstrated decreased experience. However, the majority of procedures showed a statistically significant increase in student experience after the incorporation of a required EM rotation.
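For illustration only, the paired pre/post comparison named in the procedural-skills Methods above could be sketched as follows; the rates and the SciPy call are assumptions for demonstration, not the study's actual data:

    from scipy import stats

    # Hypothetical per-procedure experience rates (fraction of students who
    # had performed each procedure at least once), pre- and post-curriculum.
    pre  = [0.62, 0.45, 0.71, 0.30, 0.88]
    post = [0.75, 0.58, 0.80, 0.41, 0.90]

    t, p = stats.ttest_rel(pre, post)  # paired Student's t-test
    print(f"t = {t:.2f}, p = {p:.4f}")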
Procedures showing a statistically significant increase in exposure (Hemoccult testing, suturing, intubation (observed or performed), chest tube observation, pulse oximetry, urethral catheterization, FSBS, ABG, arthrocentesis, local anesthesia infiltration, MDI, lumbar puncture, injection, NG intubation, venipuncture, splinting, nebulizer treatment, abscess I&D, blood culture, IV, and EKG) were either taught in the EM clerkship procedure lab or performed under faculty supervision in the ED. Conclusion: Adding a mandatory EM rotation that emphasized procedural skills contributed to increased student opportunities to learn procedures. Objectives: To determine if a didactic course followed by hands-on training would improve knowledge of dental anatomy and procedures, and perceived ability to treat this population, immediately following the course. Methods: The training module is set up with didactic teaching first, followed by hands-on training. The didactics include a slide show describing anatomy and the procedures, as well as enlarged models for demonstration. The hands-on training involves actual dental tools and anesthetics used by the participants on one another. Student comfort with treating dental emergencies, providing facial blocks, and learning dental anatomy was assessed with a seven-point Likert scale (1 = not at all comfortable, 7 = very comfortable). Perceived utility of the course was also determined by asking whether participants enjoyed their experience, would do it again, and would recommend it to others. All interns in an ACGME-accredited EM residency program participated in the module as part of a core lecture series and then voluntarily completed the surveys. Data were analyzed using descriptive statistics. A rank-sum test was used to compare Likert scores between pre-test and post-test questionnaires. Objectives: To quantify the relationship between patient satisfaction and the introduction of medical students, with the hypothesis that medical students would have a neutral or positive effect upon patient satisfaction. Methods: This is a retrospective observational study examining patient satisfaction scores before and after the introduction of medical students at a single clinical site in March 2011. Patient satisfaction surveys were administered during the study period by a third-party vendor to a randomly selected group of patients. Questions analyzed were those used for the primary outcomes reported by our institution: "Would you recommend this ED to your friends and family?" and "How would you rate our facility overall?" The percentages of "positive responses" for the seven months before and after the introduction of medical students were tabulated and compared using chi-square analysis. Results: During the study period, 12 fourth-year medical students rotated at the clinical site. Students evaluated 890 patients out of a total volume of 9661 (9.2%). Patient surveys were returned by 223 patients in the pre-medical student cohort and by 463 patients in the post-medical student cohort. For the "would you recommend" question, there was an 84.2% positive response rate in the pre-student cohort and an 80.6% positive response rate in the post-student cohort (p=0.238). For the "overall rating" question, 60.3% of patients responded positively in the pre-student cohort versus 68.1% of patients in the post-student cohort (p=0.038).
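A minimal sketch of the chi-square comparison just reported, using a 2x2 table reconstructed approximately from the stated percentages (counts rounded; Python/SciPy assumed, for illustration only):

    import numpy as np
    from scipy.stats import chi2_contingency

    # "Would you recommend": ~84.2% of 223 pre-student vs ~80.6% of 463
    # post-student respondents answered positively.
    table = np.array([[188, 35],    # pre:  positive, negative
                      [373, 90]])   # post: positive, negative
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")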
Conclusion: For our institution's primary patient satisfaction outcomes, the introduction of medical students did not have a significantly negative effect upon patient satisfaction scores, and was associated with a significant positive effect upon the overall rating of our facility by patients. The study is limited by analyzing only one clinical site, and further work is needed to validate our findings in other EDs. Objectives: We evaluated whether the performance scores given to EM residents by EM faculty were correlated with the scores residents gave those faculty. Methods: This was a cross-sectional study conducted at an urban tertiary care academic medical center with an ACGME-accredited EM residency training program with a PGY 1-2-3 configuration. Evaluations over a single academic year (2011) were extracted from an existing evaluation system by an independent party and coded so resident and faculty names remained anonymous. Performance scores were recorded for residents in each of the PGY classes and compared to the faculty scores. Resident and faculty evaluations are "global ratings" based on a nine-point Likert scale. Residents are aware of the faculty names and scores; however, evaluations of faculty are anonymous. Scores for each subsection on the evaluations were averaged for both the residents and the faculty to achieve an "overall" score for the analysis. The primary outcome was the correlation between the resident-faculty scores. Results: There were 42 residents and 26 faculty included in the analysis. There were 15 PGY-1 residents, 13 PGY-2 residents, and 14 PGY-3 residents. The faculty had an average of 14 practice-years of experience. The mean score given to residents in PGY-1 was 8.09 (95% CI 8.08-8.22), in PGY-2 was 8.55, and in PGY-3 was 8.60. The mean score given to faculty by PGY-1 residents was 8.07 (8.02-8.12), by PGY-2 was 8.03 (7.98-8.08), and by PGY-3 was 8.04 (7.99-8.09). The correlation between resident-faculty scores for all PGY classes was r=0.24 (p<0.001). The correlation between resident-faculty scores for each PGY class was r=0.29 (p<0.001) for PGY-1, r=0.39 (p<0.001) for PGY-2, and r=0.23 (p<0.001) for PGY-3. Conclusion: Despite increases in resident performance scores over the three PGY years, the scores residents gave faculty remained consistent. There was a low correlation between performance scores given to residents and scores given to faculty. Resident performance scores do not appear to influence residents' evaluations of faculty. Background: In preparation for the American Board of Emergency Medicine (ABEM) certification exam (CE), residents take an annual in-training exam (IE) during training. All EM residencies strive for a 100% pass rate on the CE. However, each residency has its own strengths and weaknesses of training (e.g., strong educational sessions in toxicology or high clinical exposure to trauma). To improve IE scores, many residency programs provide review sessions, which also serve to norm performance. One method of review is board-style questions with an audience response system (ARS), which has been shown to be an effective tool for knowledge acquisition and evaluation. Objectives: We compared scores on board review quizzes delivered by an ARS at multiple residency programs and hypothesized that mean scores would be similar across institutions. Methods: Prospective observational study of EM residents at six ACGME-accredited EM residency programs. Subjects participated in bimonthly review sessions using Rosh Review questions and an ARS.
Each review session consisted of 10 multiple-choice questions covering a major topic of the EM in-training exam: cardiology (CV), gastroenterology (GI), neurology (Neuro), toxicology (Tox), signs & symptoms (S&S), procedures (Pro), trauma (T), and respiratory (Resp). Subjects who did not complete all 10 questions for a given topic were excluded from the analysis. Descriptive statistics of mean scores by institution, standard deviations, and confidence intervals were reported. ANOVA was performed on mean scores by topic. Results: 162 residents participated. ANOVA yielded a significant difference among institutions in the topics of CV, GI, Neuro, and Pro (p<0.05). There was no significant difference between institutions in the remaining topics. The variability noted among institutions was small, and mean scores fell within a 12% range for all topics except procedures (CV = 42-54%, GI = 61-73%, Neuro = 48-59%, Tox = 56-68%, S&S = 51-60%, Pro = 47-71%, T = 35-41%, Resp = 60-60%). The residents of one program outperformed the others. Conclusion: There is variability by institution in mean scores of board review quizzes delivered by an ARS; however, this variability is generally small. Programs may choose to focus on areas of weakness as preparation for the IE. Objectives: We investigated the effect of an educational intervention on the ability of EM residents to perform the first step in a nerve block: accurate identification of the target nerve by US. Methods: In this prospective randomized trial, EM residents (years 1-3) made up to three attempts to identify six peripheral nerves (radial, ulnar, median, popliteal, interscalene, and supraclavicular) on one individual using US (Sonosite, Bothell, WA). After an initial attempt, half were randomized to receive an educational intervention (20 minutes of self-guided slide presentation) before their second attempt ('education alone'). The other half received the educational intervention before a third attempt ('education + practice'). Two US fellowship-trained faculty observed all attempts to determine accuracy of identification. Data are presented as mean ± SD with 95% CI. Results: Ten EM residents participated in this study. The median number of correctly identified nerves on the first attempt was 0. Those who received the educational intervention before the second attempt correctly identified a greater number of nerves than those without the intervention (2.6 ± 1.7 vs 0.6 ± 0.9; 95% CI 0.0485 to 3.95). Practice alone did not improve accuracy, as demonstrated by no significant difference between attempts 1 and 2 in the group that did not receive the intervention. However, accuracy improved after the educational intervention regardless of its timing. After the educational intervention, correct nerve identification in the 'education only' vs 'education + practice' cohorts was similar (2.6 ± 1.7 vs 2.8 ± 0.84; 95% CI -2.13 to 1.73). Conclusion: The ability to correctly identify nerves by ultrasound is useful in performing peripheral nerve blocks but can be challenging for novice users. We demonstrate that while practice alone had no effect on accurate nerve identification, a self-guided educational intervention significantly improved the ability of residents to identify peripheral nerves. Objectives: Our researchers utilized Doppler ultrasound to measure the velocity of blood flow through the superior mesenteric artery (SMA) in patients with sepsis, severe sepsis, and septic shock.
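The resistive index referred to in the Methods that follow is conventionally derived from the peak systolic and end-diastolic velocities; this is the standard Doppler definition, supplied here for reference rather than stated in the abstract:

    RI = (PSV - EDV) / PSV, where EDV is the end-diastolic velocity.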
The peak systolic velocity (PSV) and the resistive index (RI) of the SMA were calculated in order to assess whether one or both could predict progression to multi-organ system failure or illness severity as measured by ICU length of stay (LOS). Establishing baseline blood flow could then be used to predict which patients may go into multi-organ system failure and may benefit from early or more aggressive interventions to decrease morbidity and mortality. Background: Our previous study of 3-year EM programs found significant variability between information posted on the program websites (PW) vs. the SAEM online residency directory (RD). We hypothesize a similar discrepancy for 4-year programs, as well as variability in curricula between 3- and 4-year programs. Objectives: To identify variations between information posted on the PW and the RD, and to describe elements of PGY1-4 EM residency curricula. Methods: Observational study using the RD to identify all PGY1-4 allopathic residencies. Pre-determined elements of each residency program's curriculum were assessed using both the PW (gold standard) and the RD: ICU, pediatrics, inpatient wards, electives, orthopedics, toxicology, and anesthesia. Comparisons were made using Cohen's unweighted kappa. Comparisons with 3-year programs were made using previously collected data. Results: Thirty-four PGY1-4 programs were identified by the RD. Thirty of the 34 programs (88%) had complete curricula on both the PW and the RD; only these programs were used in the kappa analysis. Sixteen of 30 programs (53%) had no discrepancies between their PW and the RD. Agreement between the two sources for 4-year programs was excellent (k=0.85, 95% CI 0.8-0.9), compared to fair (k=0.26, 95% CI 0.19-0.33) for 3-year programs. Analysis of PWs found that PGY1-4 programs have these average numbers of blocks: 4.4 of ICU (range 2-8.5), 4.5 of pediatrics (range 1.5-9.75), 3.24 of electives (range 0-6), and 1.9 of inpatient wards (range 0-6.5). Orthopedics, toxicology, and anesthesia rotations are present in 85%, 85%, and 97% of programs, respectively. Compared to 3-year programs, 4-year programs offered 1.25, 1.4, 0.9, and 1.5 more blocks of ICU, pediatrics, inpatient wards, and electives, respectively. Conclusion: There is excellent agreement of online curricular information between program websites and the SAEM residency directory for PGY1-4 EM residencies, especially compared to PGY1-3 programs. Four-year programs offered more blocks of the rotations that were evaluated. Objectives: To identify the reasons for missed or delayed diagnosis of neurologic emergencies in the ED. Methods: This was a retrospective chart review of a convenience sample of patients with neurologic emergencies whose diagnoses were missed or delayed at one tertiary, academic ED with an annual volume of 55,000. Patients ≥18 years old with a non-traumatic neurologic emergency diagnosis after initially presenting to the ED, and whose case was reviewed by the ED's Quality Assurance (QA) committee between January 2005 and June 2012, were included for analysis. Three EPs independently reviewed each case and determined the type of error that led to a misdiagnosis or delay in diagnosis. Proportions and confidence intervals were calculated. Conclusion: In this study, knowledge gaps (KG) were the most common cause of error, followed by systems-based issues (SBI) and cognitive errors (CE); the differences were not statistically significant. EM residency programs should consider more training on the presentation and evaluation of neurologic emergencies.
EPs must be aware of the pitfalls of diagnostic shortcuts and reflect on their decision making to avoid CE. Given that radiology misreads by residents were the major SBI, EDs should consider requiring attending radiology reads for certain imaging. Although this study is limited by a small sample size, it is important for EPs to understand the causes of missed or delayed diagnosis of neurologic emergencies. Objectives: The goal of this study is to assess the current status of BU training and identify successful aspects of BU education. This study can help predict performance on the US milestone assessment and provide implications for the evolving role of BU training and practice. Methods: This was an observational, cross-sectional survey examining several aspects of senior EM residents' training and confidence with BU. Data were collected between April and June 2012 via an online survey first sent to the CORD listserv, after which snowball sampling was employed. Residents were asked about aspects of training, numbers of and confidence with US applications, and to predict their future use of US. Descriptive statistics as well as chi-square and Fisher's exact tests were performed on study variables. Results: The total number of responses was 270, and 258 surveys were included in the analysis. Approximately 93% reported having an US director and 61% had an US fellowship. All reported 24-hour BU access. A diverse range of teaching modalities appears to be utilized, including didactic, hands-on, and independent study, with hands-on learning considered most effective. Obtaining high-quality images was rated the most difficult aspect (72%). More than two-thirds indicated that BU was frequently or almost always useful, and almost 70% stated that they plan to use ultrasound during every shift after residency. Significant associations were observed between the number of US performed during residency and confidence for ten US applications (p < 0.05 for all). Involved patients' charts were reviewed for ED LOS, 24-hour return, disposition upgrade to ICU, and death in the ED or within 10 hrs of admission. We used linear regression to determine the influence of OA, completeness of checklist (CC), questions from receiving team (QRT), and PGY on LOS; logistic regression was used to determine the influence of PGY, CC, QRT, and interruptions on the OA dichotomized to high (4-5) or low (1-3). Participants: EM and rotating residents (PGY 1-3). Attending physicians participated in all handoffs, but were not evaluated. Subjects were aware of trained observers, but not the study aim. Results: There were 123 patient handoffs in 30 sessions, with a median of 5 (IQR 3.5-7) patients/session. Median OA was 4 (IQR 4-5). Objectives: To assess the feasibility and effect of widespread ED team training using simulation. Methods: ED team training was introduced in seven hospitals insured by a captive malpractice carrier: five academic centers and two community hospitals. A curriculum team developed the ED team training course, which was implemented in each of the hospitals by a core teaching team. The course consisted of core team communication concepts as well as an institution-specific communication tool; tools included trigger teams and physician/nurse huddles. Course evaluations and 3-month post-course follow-up evaluations were collected using an online collection tool. Data were analyzed and proportions and 95% confidence intervals were calculated using Microsoft Excel.
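Several of the studies above report percentages with 95% confidence intervals; a minimal sketch of one such interval calculation follows (hypothetical counts; statsmodels assumed, rather than the Excel workflow the authors describe):

    from statsmodels.stats.proportion import proportion_confint

    # Wilson 95% CI for, e.g., 20 positive course evaluations out of 23.
    lo, hi = proportion_confint(count=20, nobs=23, alpha=0.05, method="wilson")
    print(f"{20/23:.1%} (95% CI {lo:.1%}-{hi:.1%})")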
Objectives: To assess methods for evaluating competency in US in EM residency programs. Methods: Cross-sectional study. A 20-item questionnaire on ultrasound competency assessment was developed based on existing literature and current emergency US education. It was reviewed by a biostatistician and four emergency physicians with expertise in US. The questionnaire was sent to emergency US directors as well as emergency medicine program directors and/or assistant program directors. The responses were reported as percentages of total respondents along with confidence intervals. Results: A total of 122 EM residency programs participated in this study, a 75% response rate. Eighty-seven percent (95% CI 81%-92%) offer a mandatory US rotation and require a specific number of US examinations for graduation. Fifty-seven percent (95% CI 48%-65%) assess competency only at the end of the US rotation. Twenty-two percent (95% CI 14%-29%) assess US competencies annually and 19% (95% CI 12%-30%) every six months. Only 14% (95% CI 8%-20%) use an OSCE and 21% (95% CI 14%-28%) use SDOT to assess resident competency in US. Thirty percent (95% CI 21%-38%) administer a practical exam to assess US skills. Approximately one-third (33%, 95% CI 24%-41%) use multiple-choice questions for assessment of competency. Only 32% (95% CI 23%-40%) use the ACEP online interactive emergency US examination to assess resident competency. Conclusion: The majority of EM residency programs assess resident competency in bedside US; however, there is significant variation in the methods of assessment. As implementation of emergency medicine milestones continues, CORD consensus recommendations should be adopted. Objectives: To evaluate the use of an RVU-based physician productivity model when applied in an academic setting with variable job performance criteria. Methods: We performed a retrospective review of RVU data collected from an urban academic ED (annual census 90,000) from August 2010 to August 2011. Physicians were salaried without incentive-based pay or productivity tracking. There were 48 hours of teaching physician coverage, 18 hours of pediatric coverage, and 24-36 hours of non-teaching physician coverage per day. Teaching physicians taught 2.5 residents on average and pediatric physicians 1 resident. RVUs generated by physician extenders were not considered. Work RVU/hour and RVU/patient were calculated. The percentage of time that each physician spent working in the pediatric ED and as a non-teaching attending was calculated. Linear regression to predict RVU/hour was used to create a modifier to compare physician productivity when the different shift types were taken into account. Results: Our sample included four physicians providing dedicated pediatric care and 22 physicians providing comprehensive care. Physicians spent an average of 22% of their time in the pediatric ED (range 0%-100%) and 25% of their time as a non-teaching attending (range 0%-93%). The group averaged 5.0 RVU/hour (range 3.1-7.7), 2.0 patients/hour (range 1.1-2.6), and 2.6 RVU/patient (range 1.8-2.9). There was an inverse relationship between the number of non-teaching shifts and average RVU/hour generated. On average, a non-teaching physician generated 2.5 RVU/hour less than teaching physicians (p=0.002). Physicians who worked primarily pediatric ED shifts generated 0.47 fewer RVU per hour than those working adult ED shifts (p=0.38).
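The shift-mix adjustment described in the RVU abstract above could be sketched as an ordinary least-squares model of this general form; the data and variable names below are entirely hypothetical:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical physician-level data: RVU/hour vs. proportion of time
    # spent on pediatric and non-teaching shifts.
    df = pd.DataFrame({
        "rvu_per_hr":   [5.2, 4.1, 6.8, 3.5, 5.9, 4.4],
        "pct_peds":     [0.10, 0.40, 0.00, 0.90, 0.15, 0.30],
        "pct_nonteach": [0.20, 0.50, 0.05, 0.10, 0.30, 0.60],
    })
    fit = smf.ols("rvu_per_hr ~ pct_peds + pct_nonteach", data=df).fit()
    print(fit.params)  # fitted coefficients serve as shift-type modifiers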
Conclusion: Using an RVU-based evaluation of productivity in an environment where physicians spend varying time teaching and seeing lower-acuity patients complicates the evaluation of physician performance and limits the ability to provide clinical incentives. We found that teaching physicians could be nearly twice as productive as their non-teaching counterparts even when no external motivation was available. Results: 605 articles had a mean BEEM score of 3.84. Articles were primarily diagnostic (27%) and therapeutic (59%), including 37% systematic reviews, 32% randomized controlled trials, and 30% observational designs. The citation rate and BEEM rater score correlated positively (0.144), with minimal correlation (0.053) between BEEM rater score and the JCR impact factor score. In the model, the BEEM rater score significantly predicted WoS citation rate (p<0.0001) with an odds ratio of 1.24 (95% CI 1.106-1.402). In additional models adjusting for the JCR impact factor score, the h-indices of the first and last authors, the number of authors, and study design, the BEEM rater score was not significant (p=0.08). Conclusion: BEEM rater score correlates with future citations. Future research should assess this instrument against alternative constructs of "best evidence." and 2012, six of which were assessed to individual physicians. The number of OIG monetary penalties levied decreased steadily, on average over 8% annually, during this period (R^2 = 0.84), a statistically significant trend (p = 0.008). There appears to be no significant change over time in the type of violation that led to the monetary penalty, as each category decreased proportionally with time. The "reason for visit" that led to the EMTALA violation also stayed constant, following the overall declining trend over time. Conclusion: EMTALA remains a key legislative force in shaping modern emergency medical practice. The number of violations in all the categories identified has been small and has decreased over the last 10 years. Whether this decrease is the result of adaptation of the medical industry to the new rules, or of trends within the regulatory apparatus itself as the interpretive guidelines of the law are amended, warrants further study. Objectives: This study evaluated whether physicians' self-reported concern about medicolegal risk was associated with patients' utilization in three clinical scenarios, and whether it was more closely associated than state-level indicators. Methods: A nationally representative group of 4,720 physicians completed a validated scale of concern about medicolegal risk. Results were linked to respondents' Medicare claims for that year. We identified visits for new complaints of chest pain, headache, and lower back pain, and measured utilization of relevant imaging, ED utilization (if the initial visit was not to an ED), and hospital admission within 7 days of the visit. We adjusted for patient and physician factors and calculated the odds that a patient would receive each service based on the physician's malpractice concern score. We then calculated the odds of a patient receiving services based on state-level indicators of medicolegal risk. Results: We identified 8,080 headache, 8,645 chest pain, and 17,078 lower back pain visits to study physicians. For patients seen outside of an ED, higher physician concern about medicolegal risk was significantly associated with the odds of receiving imaging and, in the case of chest pain, hospitalization.
Stress testing followed the reverse pattern (see table). For patients seen in the ED, no clear pattern was evident. Compared with physicians' concerns about malpractice, state-level risk factors had few significant associations with utilization. Conclusion: For patients seen outside of the ED, high concern about medicolegal risk is significantly associated with higher utilization, suggesting that defensive medicine may have an important effect on health care costs. Previous estimates of defensive medicine relying on state-level indicators of risk should be reconsidered. These patterns were not seen in ED visits, perhaps because ED physicians had uniformly high levels of concern. Time of Day Objectives: By gathering data on the characteristics of patients who contact their doctors before coming to the ED, this study will help identify targeted populations that may benefit from alternative access to acute unscheduled care. Methods: This is a prospective cross-sectional study of adult ED patients who presented to a single tertiary care referral Level I trauma center with 115,000 annual ED visits. Consenting patients in each area of the ED were surveyed during 2-hour periods distributed throughout the day and evening, 7 days/week. They were asked whether they had attempted to contact an outside provider prior to their ED visit. Emergency Severity Index (ESI), insurance status, age, and sex were correlated with these responses and compared using Pearson's chi-square and Fisher's exact tests. Results: Patients who did and did not contact doctors before coming to the ED were compared, and no significant difference was found in patient age or sex. Of those patients with a primary doctor, 56% who presented to the ED during regular office hours had attempted to first contact that doctor; during non-office hours, 30% attempted contact (p<0.001). There was also a significant difference in outside provider contact based on insurance type (p<0.001); see figure. There was a significant difference between those with higher and lower ESI levels (ESI 1,2,3 vs. ESI 4,5; p<0.001). Conclusion: Patients with a primary doctor were more likely to attempt to contact that doctor if their ED visit was during office hours. Patients' insurance status had a significant effect upon whether they chose to seek outside care prior to their ED visit, indicating a need to address insurance in relation to ED utilization. Surprisingly, patients with a less severe ESI level were less likely to have contacted a doctor prior to arrival, and may represent a population who would benefit from an alternative form of access to acute unscheduled care. Background: The incidence of errors in emergency medicine practice is not well understood due to a lack of systematic studies. In the 1980s, the Harvard Medical Practice Study found that 4% of hospitalized patients suffer some kind of adverse event and that 60% of these are preventable (associated with error); 2.9% of all adverse events occur in the ED, suggesting that 0.116% of hospitalized patients have an adverse event that happens in the ED. However, the actual rate of medical errors and adverse events, particularly in the ED, is unclear. Objectives: To determine the rate of errors and adverse events in an academic tertiary care ED. Methods: Prospective data were collected on all patients presenting to an urban, tertiary-care academic medical center ED with a volume of 55,000 patients per year between 1/09 and 11/12.
Cases were reviewed if patients returned to the ED within 72 hours and were admitted on the second visit, were admitted from the ED to the floor and required transfer to the ICU within 24 hours, expired within 24 hours of ED arrival, or required intubation, or if the case was referred due to patient or physician complaints. Cases were randomly assigned to individual physicians not involved with the case, who reviewed the case using a structured review tool consisting of questions about the presence of error and adverse events; responses were provided using a Likert scale. Sensitivity, specificity, and confidence intervals were calculated where appropriate. Results: 2131 cases met study criteria, representing 1.4% of the patient volume seen during that time frame. An error rate of 9.5% was found in the reviewed population, representing a composite error rate of 0.13%. The breakdown of results is summarized in the table below. Conclusion: Despite over 30 years of investigation and initiation of strategies to improve outcomes since the Harvard Medical Practice Study, medical errors in the ED remain a significant concern and often result in adverse events. New strategies for curbing medical error, in the ED in particular, are necessary to attempt to reduce these outcomes. Objectives: We sought to describe variation in admission rates for deep vein thrombosis (DVT) between hospitals and to identify hospital- or community-level factors that affect the probability of hospital admission for DVT. Methods: We used the state inpatient and ED databases (SID and SEDD) of the Healthcare Cost and Utilization Project (HCUP) to perform a retrospective cohort study of patients age 18 and older who presented to the ED of any acute, non-federal hospital in California in 2007 and were diagnosed with DVT (ICD-9 codes 453.41, 453.42). Patient-level data were merged with hospital- and county-level information from the American Hospital Association and the Area Resource File. The primary outcome was ED disposition. Hospital admission variation was quantified by the coefficient of variation (COV = standard deviation/mean). Descriptive statistics were performed using ANOVA and chi-square. Hierarchical logistic random effects models were constructed to examine the contributions of individual predictors. Results: The COV in this study was three times larger than the US baseline, measured per state per 1,000 Medicare enrollees (0.547 vs 0.179). A higher probability of admission was associated with certain primary payers (Medicaid, Medicare, and "Other"), for-profit hospital status, increasing hospital size (10-bed increases), and patient age. Significantly lower admission rates were associated with an increased supply of outpatient physicians and with privately insured patients. (Table) Conclusion: Admission rates for patients with DVT vary significantly between hospitals, and this variation is associated with the community and the population that the hospital serves. Improvement in outpatient services available to patients may enable hospitals and communities to reduce inpatient utilization needs for certain conditions. Background: Governments in low-income countries face significant challenges to health care delivery. Central to the development of a comprehensive national health system is the need to systematize appropriate and timely patient referral and transfer from one health care setting to another. Successful referral processes rely on identification of severe cases, organization of transportation, communication between facilities, and prompt care at the receiving facility.
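For clarity, the composite figure in the ED error study above is simply the product of the fraction of visits reviewed and the error rate within that reviewed sample:

    0.014 x 0.095 ≈ 0.0013, i.e., about 0.13% of all ED visits.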
Objectives: This study aims to characterize inter-facility patient referral processes at a representative sample of health clinics, health centers, and referral hospitals in the most populous county in Liberia. Methods: We collected and analyzed data from a cross-sectional health referral survey in Montserrado County, Liberia, by direct interview with a qualified director of each health facility. The survey included baseline hospital data, number and type of referrals by discharge diagnosis, referral guidelines, distance to the referral facility, commonly used transportation and its cost, and methods of communication with receiving facilities. Health facilities were stratified by level of care, and perceived deficits in the referral process were compared. Objectives: This study aims to evaluate the academic contribution of Taiwan emergency departments (EDs) to the EM field by analyzing scientific publications over the past 20 years. Methods: Design: This was an observational study. Setting: All data were collected from the SciVerse Scopus database. Participants: All articles published in journals in the 2010 Journal Citation Reports (JCR) category of EM between 1992 and 2011 were included. Articles whose first or corresponding authors were from Taiwan EDs, published in 2010 JCR journals in the same period, were also enrolled. Data collection: A computerized literature search was conducted on 17 March 2012. The search terms used were "ISSN (xxxx-xxxx) AND PUBYEAR AFT 1991 AND PUBYEAR BEF 2012". Articles originating from Taiwan were retrieved by adding the limitation "AND AFFILCOUNTRY (Taiwan)". Articles originating from Taiwan EDs were retrieved with the search term "AFFILCOUNTRY (Taiwan) AND AFFIL (emergency) AND PUBYEAR AFT 1991 AND PUBYEAR BEF 2012". We collected data on articles including publication journal, publication year, and author affiliation. Data analysis: Linear regression was used, and the slope (b) of the regression was adopted as representative of trends. Results: The numbers of publications in EM journals from Taiwan and in all EM journals between 1992 and 2011 increased from 2 to 86 and from 1008 to 4112, respectively. The trends (b) and 95% confidence intervals for the number of publications from Taiwan and for the ratio of Taiwan articles to all EM publications were 6.195 (4.705 to 7.686) and 1.529x10^-3 (1.126x10^-3 to 1.932x10^-3), respectively; all p-values were <0.001. These articles were contributed equally by emergency and non-emergency physicians. The number of articles from Taiwan EDs in EM journals reached a peak in 2008, but the overall number of articles in all JCR journals was still increasing in 2011. Conclusion: In the past 20 years, the academic contribution from Taiwan to the EM field has increased substantially. The research topics and quality of studies from Taiwan EDs have been recognized by the EM field and by other medical specialties worldwide in recent years. Objectives: To develop, implement, and evaluate the effect of an ETAT-based triage process in the HNPB PED. Methods: HCWs at HNPB and consultants from TCH used the ETAT three-level triage system (emergent, priority, non-urgent) after 80% of HCWs had been trained in it. The number of PED patients with an assigned triage category indicated uptake. Key local and regional stakeholders chose these indicators to measure effect: hospital admission rate, inpatient length of stay (LOS), and mortality.
We reviewed a random sample of charts 1 year before and after the intervention, and charts for all acutely ill children (serious diagnoses or admission to the pediatric intensive care unit (PICU)) during the pre- and post-intervention periods. Age, sex, triage category, diagnosis, and disposition were noted. Results: There were 466 and 561 records in the pre- and post-intervention periods. Background: Simulation allows learners to experience realistic patient situations without exposing patients to the risks inherent in on-the-job training. Numerous studies have demonstrated the benefit of simulation in the training of residents in procedural competency and disaster medicine. A common misconception, however, is that the utility of simulation is restricted to settings with expensive high-fidelity manikins and equipment. Objectives: To develop and demonstrate the utility of a course specifically for the low-resource setting that is easily transportable, requires a minimal amount of equipment, and is low-cost and readily adaptable to a variety of global settings. Methods: The International Emergency Medicine Section of the American College of Emergency Physicians (ACEP) developed a low-resource, low-fidelity simulation didactic that was presented at the ACEP Scientific Assembly in Denver, CO, in October 2012. Learners rotated through a series of three case simulations and one procedural station with two task trainers. Pre- and post-simulation surveys were used to measure the effectiveness of the course. This study was approved by the JHU SOM IRB. Results: A total of 23 participants completed the survey, with 13 based in the US and 10 from outside the US. Participants had clinical experience in over 30 countries internationally. No participants had previously taught simulation in an international setting. All cases not only ranked highly for the successful transfer of knowledge, but also demonstrated effectiveness in highlighting cultural considerations of care (table). All participants reported that they would apply the demonstrated techniques to their domestic practice and to teaching in the low-resource setting. Conclusion: The development of simulation courses to improve practice in low-resource settings is unique and necessary. This study shows that an effective low-resource simulation course can be developed. Simulation will not only allow transfer of clinical pearls of practice but may also allow exposure to some of the cultural/ethical dilemmas one may face. The course could be used in the pre-departure setting for traveling health care providers, as well as in-country to increase the capacity of local health care providers. Background: The NIH Stroke Scale (NIHSS) score, a measure of stroke severity, is the strongest predictor of outcome after ischemic stroke (IS). Since the NINDS trial, recombinant tissue plasminogen activator (rt-PA) remains the only approved therapy for improving outcomes after IS. Using latent class analysis and data from our epidemiology of stroke study, which includes strokes of all severities, we recently determined that grouping strokes by clustering of the dichotomized presence or absence of individual NIHSS symptoms could provide additional prognostic information beyond the total NIHSS score. In particular, we found that two symptom profiles with identical median NIHSS scores had widely disparate outcomes. Objectives: We conducted a re-analysis of the NINDS trial to determine the effect of NIHSS symptom profiles on the observed differences in patient outcomes.
Methods: This was a secondary analysis of the NINDS trial, which randomized 624 IS patients to either IV rt-PA (n=312) or placebo (n=312) within 3 hours of symptom onset. We determined the proportion of the six previously identified distinct NIHSS symptom profiles in the rt-PA and placebo arms of the NINDS trial, and re-analyzed patient outcome in each arm. In particular, we were interested in the interaction of trial arm with symptom profile, the profiles being classified as A through F. The primary outcome measure in the NINDS trial was the proportion of patients with a modified Rankin score of 0 or 1 at 90 days. Results: Among the rt-PA patients, the proportions with each profile were 34% A, 29% B, 9% C, 13% D, 11% E, and 4% F; among the placebo patients, the proportions were 37% A, 34% B, 7% C, 12% D, 8% E, and 3% F. The proportion of patients with each profile did not differ significantly between treatment arms (p = 0.50). No significant interaction between treatment and profile membership was found. After adjusting for the proportion of patients in each arm with different symptom profiles, rt-PA treatment remained significantly associated with improved outcome at 90 days. Conclusion: Our findings show that IV rt-PA was beneficial across all symptom profiles. However, a limitation of this study is that these profiles were originally developed using retrospective NIHSS scores; further validation of profile patterns in the prospective setting and in more severe strokes may be warranted. Objectives: The purpose of this study is to assess the safety and efficacy of tPA administration in patients presenting to the emergency department within 4.5 hours of symptom onset who require blood pressure management prior to tPA administration. Methods: We performed a retrospective chart review of patients presenting to the ED from 2004-2011 who were treated with tPA for AIS and evaluated the outcomes of patients requiring "aggressive", "standard", or no blood pressure management prior to treatment. Results: A total of 427 patient records were included in the analysis: 273 required no blood pressure management prior to tPA administration, 65 required "standard" blood pressure management, and 89 required "aggressive" blood pressure management. Patients requiring any BP control were more likely to be women and to have a history of hypertension. The rates of symptomatic intracranial hemorrhage (sICH), in-hospital death, and good neurologic outcome did not differ significantly between these groups after multivariate analysis. When comparing the "standard" vs "aggressive" blood pressure control groups, the group requiring "aggressive" control had higher initial blood pressure and baseline NIHSS. The rates of sICH, in-hospital death, and good neurologic outcome did not differ significantly between these groups after multivariate analysis. There was also no statistically significant difference in door-to-tPA time between the group requiring blood pressure management and the group not requiring it (71.5 min vs 69 min). Conclusion: Administration of tPA in patients presenting with AIS who require blood pressure management, even "aggressive" measures, appears to be safe, does not seem to be associated with worse outcomes, and need not delay tPA administration.
Thus, the need for blood pressure management prior to tPA administration should not be an exclusion criterion for patients presenting with AIS. The Efficacy of Intravenous Morphine for Acute Migraine. Benjamin W. Friedman, Albert Einstein College of Medicine, Bronx, NY. Background: Opioids are the type of medication used most commonly to treat migraine in the ED, despite the fact that opioids are linked to ED recidivism. Alternative therapies are available, but no current medication achieves the ultimate therapeutic goal for more than 50% of patients, that is, rapid and sustained headache freedom for 24 hours. Furthermore, all commonly used acute migraine therapeutics may cause serious adverse effects. We therefore decided to determine the efficacy of intravenous morphine, the prototypical opioid analgesic, which is considered far less euphorogenic than hydromorphone and meperidine, the opioids commonly used for acute migraine. Our hope was that this medication would provide good efficacy without causing return visits to the ED during the same week. To the best of our knowledge, there are no published data concerning the efficacy of parenteral morphine for acute migraine. Objectives: To determine the efficacy of IV morphine for acute migraine and the frequency with which morphine is associated with a return visit to the ED. Methods: This was a prospective cohort study. Patients meeting International Headache Society (IHS) migraine criteria were enrolled if their pain was moderate or severe and they had not taken opioids prior to ED presentation. All patients were administered 8 mg of morphine as an IV drip over 15 minutes. Using a structured questionnaire recommended by the IHS, pain was assessed in person at baseline and at 1 and 2 hours after medication administration, and by telephone at 24 hours. A final evaluation was done by telephone 7 days after the ED visit, during which the number of return visits to the ED was assessed. Rescue medication use was elicited from the attending physician at the time of ED discharge. Headache relief was defined as obtaining a headache level of mild or none within 2 hours. Outcomes are reported as percentages with 95% CI. Results: Twenty-one patients were enrolled. All were successfully followed after the ED visit. Seven (33%, 95% CI: 13, 53%) patients obtained headache relief. Thirteen (62%, 95% CI: 41, 83%) required rescue medication. One patient achieved headache freedom and maintained it for 24 hours (5%, 95% CI: 0, 14%). Two patients were forced to return to the ED for management of their headache (10%, 95% CI: 0, 23%). Methods: We systematically reviewed PubMed, Embase, Google Scholar, and the Cochrane Central Registry of Controlled Trials from inception through 2012 using the terms "tension type headache" and "parenteral or subcutaneous or intramuscular or intravenous". We identified randomized trials in which one parenteral treatment was compared to an active comparator or to placebo for the acute relief of TTH. We only included studies that distinguished TTH from migraine. The primary outcome for this review was measures of efficacy 1 hour after medication administration. One reviewer extracted data, and a second reviewer verified the data for accuracy. Discrepancies were resolved by a third. We assessed the internal validity of trials using the Cochrane risk of bias tool. Because of the small number of trials identified and the substantial heterogeneity among study designs and medications, we decided not to combine data or report summary statistics.
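For reference, the number needed to treat used in the results that follow is the reciprocal of the absolute risk reduction (standard definition, supplied here):

    NNT = 1 / ARR = 1 / (p_active - p_placebo)

For example, if 60% of patients improve on an active drug versus 35% on placebo, NNT = 1/0.25 = 4.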
The results of individual studies are presented using the number needed to treat (NNT) with 95% CI when dichotomous outcomes were available, and using continuous outcomes otherwise. Results: Our search returned 551 results, and 163 abstracts were reviewed. Seven studies involving 397 patients were included. The most common reasons for exclusion of abstracts were use of non-parenteral medications only, no assessment of acute pain relief, and no differentiation of headache type. Risk of bias ranged from low to high. The following medications were more effective than placebo for acute pain (NNT, 95% CI): chlorpromazine (4, 2-26), dipyrone (4, 2-26), and metoclopramide (2, 1-3). L-NMMA, an NO synthase inhibitor, also outperformed placebo, as measured using percent improvement in pain score. The following medications were not consistently more effective than placebo: mepivacaine, meperidine, and sumatriptan. Objectives: We addressed the first hypothesis by determining whether improvement of headache is correlated with improvement in BP among patients who present to an ED with both elevated BP and headache and are treated with IV metoclopramide. Methods: Two migraine RCTs were conducted in four different academic EDs in the New York City area. In each trial, patients meeting International Headache Society migraine criteria were treated with metoclopramide 10, 20, or 40 mg IV with diphenhydramine 25 mg. Some patients also received dexamethasone 10 mg IV. At baseline, all subjects had pain assessed using an 11-point verbal scale (0-10) and BP measured using an automated sphygmomanometer. All patients then had BP and pain re-assessed one hour later. No patients were administered anti-hypertensive agents. Patients were included in this analysis if they had a baseline systolic BP above 150 mmHg or a baseline diastolic BP above 95 mmHg. Improvement in pain score was calculated as the one-hour pain score subtracted from the baseline pain score. Similarly, improvement in BP was calculated as the one-hour systolic and diastolic BP subtracted from the baseline systolic or diastolic BP. Correlations between improvement in pain score and improvement in systolic and diastolic BP were graphed and measured using Spearman's rho. Results: Of 550 patients enrolled in the two migraine trials, 98 (18%) met our definition for elevated BP. Among this group, the mean baseline pain score was 8.2 (SD 1.9). The mean one-hour pain score was 3.6 (SD 3.1). Systolic BP improved by a mean of 14.2 (SD 17.0) while diastolic BP improved by a mean of 9.7 (SD 13.9). For change in systolic BP and pain relief, rho was -0.1 (p=0.47). For change in diastolic BP and pain relief, rho was 0.0 (p=0.99). Conclusion: Among patients who present to an ED with elevated BP and an acute migraine and are treated with an anti-migraine medication, pain relief is not associated with improvement in systolic or diastolic BP. Methods: Eligible subjects were undergoing a law enforcement training class that included a) a 5-second TASER CEW application, b) a 100-yard sprint with directional changes (RUN), c) a 45-second resistive fight against an opponent (RES), d) a hide-and-bite exercise with a LEO canine (DOG), or e) a 10% oleoresin capsicum spray to the face (OCS). Volunteers underwent a baseline SFST (a three-part test) administered by a qualified LEO. They were then randomized for data collection during one of the five exposures. At 15 minutes post-completion of the task, they received another SFST for comparison.
SFST scoring was on a pass/fail basis per certified standards, with detailed recording of any parts that were failed. Test performance was compared using Fisher's exact tests. Results: Fifty-seven subjects were enrolled (median age 31.9 years, range 19 to 55; 89% male): 13 CEW, 10 RUN, 12 RES, 11 DOG, and 11 OCS. Three subjects failed the SFST prior to the task exposure, one in each of the RES, OCS, and RUN groups. All subjects passed the SFST 15 minutes after the exposure. There was no worsening of SFST performance post-task in any of the groups. Methods: Retrospective chart reviews were conducted via query of electronic medical records at a university tertiary referral center and an urban community hospital by a single non-blinded abstractor specifically trained in data collection and calculation of ICH volume. The most recent 100 cases of ICH were identified at each site by ICD-9 codes 431 and 432.9 (2007-2012). SAH and ICH associated with trauma or neoplasm were excluded. Non-contrast CT brain studies obtained at presentation were reviewed, and ABC/2 scores (the product of the three orthogonal hemorrhage dimensions, divided by 2) were calculated to estimate ICH volume in mL. This method estimates ICH volume using a previously published ellipsoid function with inter-rater and intra-rater ICC of 0.99. ICH volume was log-transformed, and a linear regression model with the following covariates (believed a priori to have a biological relationship with ICH) was fitted: age, sex, race, race*age (interaction term), history of hypertension, and SBP. Results: Two hundred total subjects were identified. Twenty were excluded because the numbers of subjects of races other than African American and Caucasian were small. Overall mean ICH volume was 42.6 mL (SD 65) and was similar between groups (mean 41.5 and 43.4 mL, respectively). The overall population was 54% female and 45% male with a mean age of 65 (SD 16), each of which was nearly identical in both groups. The results of the regression model are presented in the table. The predicted values from the model for the age*race interaction are presented in the figure. Conclusion: There was no statistically significant age*race interaction on ICH volume, and it appears that younger African Americans do not suffer larger-volume ICH. However, older African Americans may have larger ICH by volume than Caucasians, an interesting hypothesis that should be evaluated with larger data sets. Background: Computed tomographic angiography (CTA) assists clinicians in assessing suitability for tissue plasminogen activator (tPA). Controversy exists as to whether the intravenous contrast given for CTA predisposes to increased intracranial hemorrhage (ICH) and poor outcome after tPA use. Objectives: This study tests the hypothesis that CTA prior to thrombolysis results in an increased rate of ICH. We observed the association of CTA with hemorrhagic transformation (HT), parenchymal hematoma (>30% of infarct zone with mass effect) (PH2), symptomatic intracranial hemorrhage (sICH), and death. Methods: This is a retrospective database review from the Specialized Programs of Translational Research in Acute Stroke (SPOTRIAS) trials registry. SPOTRIAS is a multi-center research group with a registry database established in support of stroke research trials. All patients present in the registry who received thrombolysis for AIS and were not enrolled in another interventional clinical trial were included. Outcome measures between groups receiving and not receiving CTA prior to thrombolysis were analyzed with chi-square and Fisher's exact tests.
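A minimal sketch of the two-group comparison just described, with a 2x2 table reconstructed approximately from the rates reported in the Results below (counts rounded; Python/SciPy assumed, for illustration only):

    from scipy.stats import fisher_exact

    # Any-ICH by CTA exposure: ~11.0% of 473 CTA patients vs ~8.1% of 541
    # non-CTA patients (counts rounded from the reported percentages).
    table = [[52, 421],   # CTA:    ICH, no ICH
             [44, 497]]   # no CTA: ICH, no ICH
    odds_ratio, p = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")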
Results: Among 1014 patients who received tPA, 473 received a CTA prior to tPA administration. Baseline patient clinical characteristics were similar between groups. CTA was associated with a nonsignificantly higher rate of any ICH compared with patients who did not receive a CTA (11.0% vs. 8.1%, p = 0.12). However, the rate of PH2 was lower in the CTA group (2.1% vs. 4.6%, p = 0.03). The rate of sICH was also nonsignificantly lower in the CTA group (3.6% vs. 5.5%, p = 0.14). The CTA group also had a lower overall rate of death (7.2% vs. 12.6%, p = 0.005). Conclusion: After intravenous thrombolysis, there was no statistically significant difference in overall ICH. However, significant differences in both death and PH2 were noted between groups. Additional statistical power may confirm or refute the observed nonsignificant trends. Further research is also needed to determine potential confounding effects from selection bias and from clinical characteristics undetected by this registry review. Objectives: To assess the effect of microEEG (a novel ED-friendly EEG device) on the clinical management and outcomes of ED patients with AMS. Methods: Randomized controlled trial at two urban teaching hospitals. Inclusion: adult patients (>18 years old) with AMS. Exclusion: an immediately correctable cause of AMS (e.g., hypoglycemia). Patients were randomized to routine care (control) or routine care plus EEG (intervention). For patients assigned to the intervention group, a microEEG was recorded by research assistants upon presentation, and the results were reported to the ED attending by an epileptologist within 30 minutes. No protocol for workup or treatment was specified for either group. Outcomes: microEEG results, change in ED management (changes in differential diagnosis, diagnostic work-up, and treatment plan from enrollment to completion of initial work-up and to disposition), length of ED and hospital stay, ICU requirement, and in-hospital mortality. Statistical analysis: Data are reported as percentages with 95% confidence intervals for proportions. Baseline characteristics and length of ED stay were compared using Fisher's exact and Mann-Whitney tests. Changes in differential diagnosis at specified time points and other outcomes will be compared between groups using chi-square. Results: Preliminary analysis was performed on 72 patients (30 controls and 42 interventions). Target sample size: 130 (65 patients/group; enrollment 90% completed to date). Patients in the two arms had comparable characteristics at baseline (age, sex, history of seizure, seizure in the field or in the ED, anticonvulsants in the field or in the ED, new neurological findings, and acute abnormal head CT). EEG in the intervention arm revealed abnormal findings in 88% (75-95%), including NCS in 5% (1-17%). ED length of stay was not significantly different between the two groups. The effect of microEEG on clinical management in the intervention arm is partly shown in the table. Conclusion: EEG is a useful diagnostic test for ED patients with AMS, and it can affect the clinical management (diagnosis and treatment) of these patients. Objectives: To determine the change in the mean IV t-PA (intravenous tissue plasminogen activator) rate in EMS-transported acute ischemic stroke (AIS) patients after implementation of a county-wide EMS routing protocol, and to examine whether regionalization was independently associated with a change in treatment rate.
Methods: This is a before-after observational study of AIS patients admitted to hospitals within two northern California counties during a 3-year period. Patient records were obtained from the discharge abstract file of the statewide administrative database and were linked to prehospital patient care records using probabilistic linkage methodology. Discharge diagnosis of stroke was identified using validated codes for AIS, and thrombolytic use was determined by procedure codes found in the discharge database. Both direct hospital admissions and inter-facility transfers were excluded. The mean rate of IV t-PA for each time period was calculated. The independent association of regionalization status with IV t-PA rate was examined after controlling for patient and hospital demographics, stroke center designation, teaching status of the hospital, patient residence, and day of the week. Results: EMS transported 6181 patients with a primary or secondary diagnosis of stroke. Mean age at time of admission was 74 (±15) years; 54% (n=3312) were female, and 63% (n=3870) were Caucasian. The majority (70%, n=4132) of patients were treated at stroke centers and 97% (n=6005) were treated at community hospitals. Among EMS-transported patients, the IV t-PA rate did not increase after the implementation of the routing protocol (pre-protocol phase 2.82%, post-protocol phase 2.85%; p = 0.95). After controlling for patient demographics, stroke center status, teaching status of the hospital, and the weekend effect, prehospital routing protocol implementation was not independently associated with an increased rate of IV t-PA administration in AIS patients (OR 0.96, 95% CI 0.63-1.47). Conclusion: Our preliminary findings suggest that thrombolytic rates did not increase after implementation of an EMS routing protocol for stroke. Results: Among 1023 enrollments, 83 (8%) resulted in a discharge diagnosis of PS (69) or oNES (14). Twenty of the 83 enrollments occurred in patients previously enrolled in the trial (a re-enrollment rate of 24%, compared to the overall study cohort re-enrollment rate of 13%). Patients with PS/oNES were younger (35 vs. 44 years) and more likely to be female (65% vs. 45%) than the overall study cohort. Ten patients <18 years old (8% of pediatric patients) were diagnosed with PS (8) or oNES (2). Diagnosis of PS/oNES followed ICU admission in 9 (11%) and hospital admission in 30 (36%) of enrollments. There were two endotracheal intubations (2.4%), both in patients with oNES, and one death in a patient with oNES related to intracerebral hemorrhage. In 59% (95% CI 49-70%) of enrollments with PS/oNES the patient was determined to have stopped "seizing" without the use of rescue drugs at ED arrival, as compared to 68% (95% CI 65-71%) in the overall study cohort. Conclusion: About 8% of adults and children treated for status epilepticus in the prehospital setting are ultimately diagnosed as seizure mimics. Treatment with benzodiazepines in the prehospital setting is similarly effective in these patients as compared to patients ultimately diagnosed with status epilepticus. The rate of endotracheal intubation is also low. Recidivism in this population is common. Objectives: To characterize ED utilization and differences in risks and outcomes between pregnant women who use vs. do not use the ED during their perinatal period.
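Probabilistic linkage, as used in the stroke regionalization study above to join prehospital records to discharge data, scores candidate record pairs by summing agreement weights across identifiers and accepting pairs above a threshold. A toy Fellegi-Sunter-style sketch; the fields, weights, and threshold are all invented for illustration:

```python
# Toy Fellegi-Sunter-style scoring: each field carries a log-odds agreement weight.
WEIGHTS = {"dob": 6.0, "sex": 1.0, "zip": 3.5, "transport_date": 4.0}

def link_score(ems_rec: dict, hosp_rec: dict) -> float:
    """Sum agreement weights; disagreements contribute nothing in this toy version."""
    return sum(w for field, w in WEIGHTS.items()
               if ems_rec.get(field) == hosp_rec.get(field))

ems = {"dob": "1938-04-02", "sex": "F", "zip": "94601", "transport_date": "2009-03-14"}
hosp = {"dob": "1938-04-02", "sex": "F", "zip": "94601", "transport_date": "2009-03-14"}
print(link_score(ems, hosp) >= 10.0)  # accept the pair above a chosen threshold
```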
Methods: Secondary analysis of data from a Midwestern county-wide post-partum depression study. County residents giving birth Feb-May 2009 were systematically recruited from the postpartum hospital floor for a 2-month postpartum phone survey and medical record review. Trained abstractors, monitored for intercoder reliability, collected demographics, OB/Gyne and ED utilization, and pregnancy outcomes. Standard descriptive statistics were used to identify differences between women who used vs. did not use the ED in the peripartum period, defined as 8 weeks gestation to 4 months postpartum. Logistic regression, adjusted for demographic factors, was used to identify the association between ED use, postpartum care, and depression. Results: 670/906 (74%) of eligible postpartum women, demographically representative of the county's full birth population, were enrolled; 643 (96%) completed the postpartum survey. Among participants, 218 (33%) used the ED at least once and 108 (16%) had more than one ED visit during their peripartum period. Of 520 ED visits generated by this group, 49% were for non-obstetric illness, 38% for obstetric-postpartum reasons, 10% for injury, and 3% for substance/mental health concerns. Compared to non-ED users, ED users were significantly more likely to be teenagers, single, black, Medicaid-insured, to smoke, use drugs, and to have an abuse history, insecure housing situation, late entry into prenatal care, inadequate prenatal weight gain, and higher rates of premature birth. Adjusting for demographic factors and insurance status, ED users were more likely to screen positive for postpartum depression, OR = 2.9 (95% CI 1.71-4.99), and less likely to attend a scheduled postpartum visit, OR = 0.48 (95% CI 0.25-0.94). They were equally likely to have completed recommended well-child care visits and immunizations. Conclusion: Among a fully insured sample of pregnant women, an ED visit was a marker for poor perinatal outcomes and need for integrated psychosocial interventions in the perinatal period. Background: Manual vacuum aspiration (MVA) is an expeditious, low-cost intervention for treating stable miscarriage (also known as missed abortion) that is widely used in outpatient settings and developing countries, but may not be fully exploited in US emergency departments. Objectives: We aimed to characterize women who both present to emergency departments with stable miscarriage and receive ambulatory point-of-care MVA (EDMVA). We also aimed to describe the characteristics associated with EDs that provide this service. Objectives: We sought to determine whether ondansetron or the combination of doxylamine plus pyridoxine is superior for treating NVP. Methods: We performed a prospective, randomized, double-blind, controlled study of women in the first trimester of pregnancy requesting treatment for NVP. Prior to treatment, subjects graded the severity of both nausea and emesis on two 100-mm visual analog scales (VAS) and were then randomized to treatment with either one tablet of ondansetron 4 mg plus a second (placebo) tablet (O group) or one tablet of pyridoxine 25 mg plus one tablet of doxylamine 12.5 mg (P+D group), every 8 hours for 5 days. All study medications were identical in appearance. A VAS was repeated 5-7 days after treatment to assess the degree of nausea and vomiting, and any adverse effects over the treatment period were recorded. The primary outcome was reduction in nausea on the VAS. Secondary outcomes were reduction in vomiting and the number of patients reporting sedation. Means and standard deviations were calculated, and groups were compared using a rank sum test.
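The adjusted odds ratios in the peripartum ED-use study above come from a logistic regression of the outcome on ED use plus demographic covariates. A minimal sketch of that kind of model with statsmodels; the DataFrame and its column names (`depression`, `ed_user`, ...) are invented placeholders, not study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table; column names are illustrative only.
rng = np.random.default_rng(0)
n = 643
df = pd.DataFrame({
    "depression": rng.integers(0, 2, n),  # positive postpartum depression screen
    "ed_user": rng.integers(0, 2, n),     # any peripartum ED visit
    "age": rng.integers(18, 45, n),
    "medicaid": rng.integers(0, 2, n),
})

model = smf.logit("depression ~ ed_user + age + medicaid", data=df).fit(disp=0)
print(np.exp(model.params["ed_user"]))          # adjusted OR for ED use
print(np.exp(model.conf_int().loc["ed_user"]))  # 95% CI on the OR scale
```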
Results: A total of 17 patients completed the study. Objectives: The objective of this study was to examine resident ED physicians' comfort levels in diagnosing and treating MDD as compared to diagnosing and treating other commonly presenting medical illnesses in the emergency department, hypertension and diabetes. Methods: We examined levels of comfort in diagnosing MDD in comparison to hypertension and diabetes using a survey with a five-point Likert scale. We also examined comfort levels with prescribing medications for MDD compared to hypertension and diabetes in three scenarios: (A) without follow-up (no primary care provider/PCP), (B) with a PCP available for follow-up (but without speaking to him/her), and (C) with an available PCP who is reachable by phone to discuss a treatment plan. We had a total of 20 resident respondents to our survey. Objectives: To quantify the frequency of mental health diagnoses among victims of violence and to characterize patients receiving these diagnoses and the institutions where they receive care. Methods: We used the 2009 KID dataset, an all-payer, nationally representative sample of hospital discharges of youth aged 20 years and younger. Patients between the ages of 10 and 20 who were admitted for violent injuries were identified using the CDC-recommended injury E codes. Patients were coded for the presence of one or more of the following four mental health diagnoses: acute stress disorder (ASD), posttraumatic stress disorder (PTSD), depression, and substance abuse. The sample was subsequently analyzed using descriptive statistics, logistic regression, and hierarchical linear modeling, after applying appropriate survey weights. Results: The sample was representative of 19,399 youths who sustained violent injuries: 35% with stab wounds, 18% with firearm injuries, and 44% struck by an object. The median age of the cohort was 18 and the median length of stay was 2 days. Eighty-five percent were males, and 42% were black, 27% white, and 24% Hispanic. Most institutions were non-trauma centers (43%), followed by Level I trauma centers (39%). The majority of patients received no mental health diagnosis (69%), whereas 30% were diagnosed with substance abuse, 1% with depression, 0.1% with PTSD, and 0.5% with ASD. Objectives: To measure the prevalence of legal needs, and whether legal needs were associated with demographic characteristics, poor health, non-English language preference, or other factors in the patient population at a hospital-based emergency department (ED). Methods: Over a 1-week period, a 29-item written survey (English and Spanish) was offered to families in the waiting room at the University of Colorado Hospital ED. Questions addressed demographics, health status, and language preference. Patients were asked if they had experienced any of 13 legal problems in the past 12 months, grouped according to the National Center for Medical-Legal Partnership's classification system (income, insurance, housing, education, employment, legal status, family safety). Patients were asked about access to legal services and the effects of legal problems on their health. Survey responses were summarized using proportions and 95% CIs; ORs and 95% CIs were used to test for associations between patient characteristics and legal needs. Results: Surveys were collected from 325 patients. Most responded in English (90%) and were younger than 40 (61%), female (60%), and non-Hispanic (69%).
Significant proportions never graduated from high school (17%), earned <$15,000 annually (41%), reported at least one chronic medical condition (43%), and rated their health as only "fair" or "poor" (30%). Overall, 64% (59-70%) reported at least one legal problem in the past 12 months. Of these, 33% (26-39%) stated that their legal problem(s) had an adverse effect on their health, and 89% (84-93%) had no access to legal services. Legal problems were associated with annual income less than $25,000 (OR 2.17, 95% CI 1.32-3.57), unemployment (OR 2.29, 95% CI 1.34-3.91), and unstable housing (OR 3.76, 95% CI 1.28-11.05). Conclusion: In this ED survey, unmet legal needs were common and may be associated with adverse health outcomes. Additional studies are needed to ascertain the effect of screening for legal need, and of medical-legal partnerships, on health outcomes. Background: Homeless patients visit the ED at rates up to 12 times higher than comparable housed patients, yet there is a paucity of research on how this influences emergency medicine residents, who are the primary physician caregivers in many EDs. Objectives: To characterize the experiences of EM residents in caring for homeless patients and to explore how these experiences influence resident personal and professional development. Methods: We conducted in-depth interviews with residents of two northeastern urban EM residency programs. A random purposeful sample diverse in training year was selected, with sample size determined by theoretical saturation. Interviews were digitally recorded and professionally transcribed. A core team of three researchers with diverse content-relevant expertise independently coded transcripts and met regularly to reconcile coding differences. The constant comparison method was used to identify new codes and refine existing ones iteratively. The final code structure was applied to all data using Atlas.ti (GmbH). Results: Four core themes pertaining to the resident experience emerged from 23 interviews. First, residents learn how to care for homeless patients through modeling more senior physicians, storytelling, and experience, rather than formal curricular training. Second, residents learn unique aspects of EM by caring for homeless patients. For example, residents learn to integrate social and systems-level factors into medical decision making (i.e., considering homelessness in disposition decisions). Third, residents struggle with role boundaries as emergency physicians when caring for homeless patients. Though the ED regularly fills gaps in the social service system by providing shelter, food, and other non-medical resources, residents vary in how much of this care they embrace as their job. Finally, caring for homeless patients affects residents emotionally. While residents feel pride in EM's mission to serve all patients, they feel frustrated by what they perceive as a limited ability to make a difference in the lives of homeless patients. Objectives: To determine the prevalence of alcohol intoxication in the ED as well as the treatment needs of such patients. Methods: For this cross-sectional study, all patients presenting to the ED of an urban, Level I trauma center from 11/1/2012 to 11/8/2012 were prospectively screened by trained research associates.
When an intoxicated patient was identified, study information was collected, including patient demographics, diagnoses, vital signs, blood alcohol concentration (BAC) by breath, GCS on arrival, ability to ambulate independently on arrival, use of physical or chemical restraint, previous ED visits, length of stay, and health insurance. Data were analyzed using descriptive statistics. Conclusion: During this study period, 15% of patients who presented to the ED were intoxicated with alcohol. Overall, 81% of patients were deemed to require treatment in the ED. We will continue screening to determine the prevalence, treatment needs, and need for agitation treatment of ED patients with alcohol intoxication. Objectives: To assess the effect of a wireless incentive payment on subject retention, satisfaction, and safety compared to those paid with cash. Methods: A prospective cohort study using longitudinal data collected during a large RCT of an IPV intervention. Female patients aged 18-64 enrolled from an urban ED setting were compensated for completing 12 weekly automated phone surveys. A natural experiment occurred as the first 112 participants enrolled received in-person cash incentives, followed by 103 participants enrolled using a wireless incentive structure with generic bank cards. A backward-elimination GEE model, adjusted for demographics, was used to examine the association between payment type and the number of calls completed over the 12-week period. At 3 months, participants were asked if study participation affected their safety and about their satisfaction with the study and incentive structure. Objectives: We sought to evaluate the utilization and safety of a treatment protocol for low-risk PE in an EDOU. Methods: We performed a prospective, 18-month evaluation with 30-day telephone follow-up for all patients placed in our EDOU for the treatment of PE between December 1, 2010, and May 31, 2012. We created a treatment protocol for our EDOU for patients diagnosed with PE in our ED who were judged to be low-risk by the Pulmonary Embolism Severity Index (PESI I and II). This protocol included telemetry monitoring, initiation of anticoagulation, performance of an echocardiogram, bilateral lower extremity duplex ultrasound, and consultation by the hospital's thrombosis service to arrange outpatient follow-up. Primary outcome measures included inpatient admission and any complications during the EDOU stay or follow-up period. Results: Twelve patients were placed in the EDOU for PE during the 18-month study period. Average age was 42 years, and 75% were male. Six patients (50%) were admitted to an inpatient unit following the EDOU stay. Reasons for inpatient admission included: hypoxia/worsening dyspnea (2), right ventricular strain on echocardiogram (1), large clot burden on duplex ultrasound (1), and lack of availability of testing/thrombosis service consultation during the EDOU stay (2). There were no adverse events in the EDOU. Of those patients treated and discharged from the EDOU, all reported compliance with outpatient follow-up, and none reported hospitalization or adverse events during the 30-day follow-up period. Conclusion: Utilization of the PE treatment protocol in our EDOU was surprisingly low (<1 patient/month). While the overall inpatient admission rate was high, some of these cases related to logistical issues rather than medical concerns or complications.
Further evaluation of an EDOU PE protocol may continue to demonstrate the safety and efficiency of this approach when compared to inpatient admission. Evaluation of an Age-adjusted D-dimer Threshold in the Diagnosis of Acute Venous Thromboembolism. Joel C. Rowe and Michael R. Marchick, University of Florida, Gainesville, FL. Background: The traditional D-dimer cutoff for diagnosis of acute venous thromboembolism (VTE) is quite sensitive but lacks specificity, resulting in a high proportion of patients who undergo negative CTs. D-dimer levels typically increase with age, suggesting that a single D-dimer cutoff for all age groups is inappropriate and could potentially lead to unnecessary CT scanning. Objectives: The authors hypothesized that an age-adjusted D-dimer cutoff (0.5 ng/mL for those aged 50 and under, and age × 0.01 ng/mL for those over age 50) could improve the specificity of the assay for VTE diagnosis without unsafely sacrificing sensitivity. The miss rate considered tolerable by the authors was that proposed in developing the Pulmonary Embolism Rule-out Criteria (1.8%). Methods: Retrospective chart review of a consecutive cohort of ED patients at an academic referral center who had a D-dimer ordered as part of a diagnostic workup for acute VTE. Adult patients who presented between January 2006 and May 2010 were included. Age, D-dimer, final interpretations by board-certified radiologists, discharge diagnoses, and therapeutic interventions were recorded. Patients with a discharge diagnosis of acute VTE who received anticoagulation for an apparent acute VTE after imaging were considered VTE+. Results: 5,556 patients had D-dimers ordered during the study period, of whom 810 had mildly elevated D-dimers (between 0.5 and 1.0 ng/mL) and subsequent imaging. Twenty-six (3.2%, 95% CI 2.0-4.4%) of these patients were determined to be VTE+ (25 with pulmonary embolism (PE), 1 with deep venous thrombosis). Of these patients, 130 were identified with D-dimer levels >0.5 ng/mL but less than the age-adjusted cutoff. Four (3.1%, 95% CI 0.1-6.1%) were determined to have acute PE. Two of these patients had subsegmental emboli (one with pain on the contralateral side of the chest) and one had a questionable diagnosis based on a poor-quality CT and an intermediate-probability V/Q scan. Conclusion: The overall theoretical rate of missed VTE diagnoses using an age-adjusted cutoff in this cohort is similar to that of a doubled D-dimer threshold and greater than the theoretical "tolerable" miss rate of 1.8%. However, further research is justified given that all but one of the PEs identified were questionable diagnoses or subsegmental emboli, and the potential for reduction in CT utilization is significant. Objectives: We explored reasons for asthma-related ED utilization among adult patients. Methods: We used a piloted interview guide to conduct semi-structured qualitative interviews with a purposive sample of 26 ED asthma patients from June to August 2012, until theme saturation was reached. Interviews were audiotaped and transcribed verbatim. Transcripts and field notes were entered into NVivo 10 and double-coded, using an iterative process to identify patterns of responses, ensure reliability, examine discrepancies, and achieve consensus through content analysis.
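The age adjustment evaluated in the D-dimer study above is a one-line rule. A sketch, using the cutoffs exactly as stated in the abstract (ng/mL; conventional assays often report µg/L, so units matter), with illustrative function names:

```python
def age_adjusted_cutoff_ng_ml(age_years: int) -> float:
    """Age-adjusted D-dimer cutoff per the abstract: 0.5 ng/mL at age <= 50,
    age * 0.01 ng/mL above 50 (e.g., 0.72 ng/mL for a 72-year-old)."""
    return 0.5 if age_years <= 50 else age_years * 0.01

def ddimer_positive(age_years: int, ddimer_ng_ml: float) -> bool:
    return ddimer_ng_ml >= age_adjusted_cutoff_ng_ml(age_years)

print(ddimer_positive(72, 0.65))  # False: 0.65 < 0.72, CT potentially avoided
print(ddimer_positive(45, 0.65))  # True: 0.65 >= 0.50
```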
Results: Themes that emerged indicate that patients view their asthma symptoms in two categories: those controlled by self-management, and those requiring a provider's attention. Preferred site of asthma care varied across patients. Reasons for ED utilization included: acuity: "When I have very severe shortness of breath… that's when I come to the ED"; insurance status: "I have to come where I know they are going to see me and care for me regardless of if I have insurance or not"; wait time: "If I had gone to the office I probably would have been sitting there longer than I sat in the ER because it's a walk-in and not an appointment"; ED resources/expertise: "I think I get the best treatment, coming to the ED. They know what you need and attack it right away"; lack of improvement: "If I can basically get rid of it, I'm fine. If not, then it's time to come to the ED"; lack of asthma medication: "I would seek treatment from the ER mainly because I ran out of medications"; inability to access an outpatient provider: "I actually tried calling them yesterday for an appointment and they said they would call me back but they never did"; referral by an outpatient provider: "When I called my primary care they told me to come to the ER"; and referral by a friend or family member: "My fiancé, he told me to come because he didn't like the way I was breathing". Objectives: To create a propensity matching score which corrects for illness acuity in patients given ESA, and then retest the association of ESA with mortality. Methods: Using an existing multicenter sample of 7,940 emergency department patients who underwent testing for acute PE (n=481), we used logistic regression to create a propensity score (propscore). The score was tested for accuracy at predicting ESA using receiver operating characteristic (ROC) curve analysis. The independent predictive effect of ESA was tested using conditional logistic regression, with cases (deaths, 1.3% of the sample) and controls (survivors) matched by the propscore, and with the following predictors: ESA, the Pulmonary Embolism Severity Index (PESI), respiratory distress, and end-stage condition. Results: The six propscore predictors for ESA (350/7940, or 4.4%) were: PE the most likely diagnosis, unilateral leg swelling, history of PE, active malignancy, O2 saturation <94%, and HR >100. The area under the ROC curve for the propscore was 0.78 (95% CI 0.73-0.80), indicating fair accuracy for predicting ESA. Conditional logistic regression revealed the odds ratios (95% CIs) shown in the table for the prediction of death. Conclusion: After propensity matching, the use of ESA had no effect on mortality. The appearance of equipoise justifies the need for a randomized trial. Objectives: To determine the physiologic effects of the PMR position in obese subjects after intense exercise. Null hypothesis: the position would not adversely affect respiratory or cardiovascular function. Methods: An experimental, randomized, cross-over trial in human subjects conducted at a university exercise physiology laboratory. Ten otherwise healthy, obese (BMI >30) subjects performed a period of heavy exertion on a cycling ergometer to 85% of maximum heart rate, and then were placed in one of three positions in random order for 15 minutes: 1) seated with hands behind the back, 2) prone with arms to the sides, 3) the PMR position. While in each position, mean arterial blood pressure (MAP), heart rate (HR), minute ventilation (MV), oxygen saturation (O2sat), and end-tidal CO2 (etCO2) were measured every 5 minutes. A priori, evidence of hypoxia was defined as O2sat <94%, and hypoventilation as etCO2 >45 mm Hg. Subjects rested between each of the three exercise/position trials. We compared these parameters between positions, with data analysis performed using repeated-measures ANOVA and paired t-tests.
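A propensity score like the one in the ESA study above is simply the predicted probability of treatment from a logistic model, checked with ROC analysis. A minimal sketch with scikit-learn; the feature matrix and labels are random placeholders standing in for the six predictors, so this illustrates the mechanics only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 7940
# Hypothetical binary predictors standing in for the abstract's six variables:
# pe_likely, leg_swelling, prior_pe, cancer, hypoxia, tachycardia.
X = rng.integers(0, 2, size=(n, 6))
treated = rng.integers(0, 2, size=n)      # ESA given (placeholder labels)

model = LogisticRegression().fit(X, treated)
propscore = model.predict_proba(X)[:, 1]  # P(ESA | covariates)

# With real predictors, fair discrimination would resemble the abstract's AUC of 0.78.
print(roc_auc_score(treated, propscore))
# Matching step (not shown): pair each death with survivors of similar propscore.
```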
Results: There were no significant differences identified between the three positions in MAP, HR, MV, or O2sat at any time period. There was a slight increase in heart rate at 15 min in the PMR position over the prone position (95 vs. 87 beats/min). There was a decrease in end-tidal CO2 at 15 min in the PMR over the prone position (32 mm Hg vs. 35 mm Hg). In addition, there was no evidence of hypoxia or hypoventilation during any of the monitored 15-min position periods. Conclusion: In this small study in obese subjects, there were no clinically significant differences in the cardiovascular and respiratory measures comparing the seated, prone, and PMR positions following exertion. Objectives: We aimed to determine whether ED physicians routinely select low tidal volume ventilator settings for newly intubated ED patients who develop ARDS. Methods: This study is a retrospective chart review of newly intubated adult patients in a single, urban ED from May 2009 through April 2011. Charts were independently analyzed by two clinician reviewers [MGA, MCS] to identify patients who met criteria for ARDS within 48 hours of presentation to the ED. Both reviewers used a standardized form to assess patient inclusion and were blinded to tidal volumes. Disagreements were resolved by review and discussion among all authors. Patients were included if they were >18 years of age, were intubated in the ED, and were found to have bilateral infiltrates on an imaging study, a PaO2/FiO2 (P/F) ratio less than or equal to 300 mm Hg, and the absence of heart failure contributing to the respiratory symptoms. Using a kappa statistic, we assessed inter-rater agreement. We then compared the tidal volume set by the ED physician to the recommended setting of 6 mL/kg of predicted body weight. Agreement with recommended settings was determined if the initial tidal volume was less than the 6 mL/kg recommendation; settings within 10 mL of the recommendation were also included. The mean difference and confidence interval between the settings were determined with a t-statistic. Results: We identified 34 patients for inclusion. Kappa for agreement was 0.76 (95% CI 0.642 to 0.879). Patients ranged in age from 32 to 82 years (mean 56); 47% were male and 53% were female. Severity of ARDS based upon P/F ratios was as follows: 29% with mild ARDS (P/F 201-300), 18% with moderate ARDS (P/F 101-200), and 53% with severe ARDS (P/F ≤100). Patients were excluded if they weighed >250 lbs, had a single kidney, were on dialysis, had a kidney transplant, were non-English speakers, were not able to give consent, or had previously been enrolled in the study. All ED patients were screened by trained research coordinators for eligibility immediately after triage. Screening was performed daily from 0700 to 2400 hrs during a 10-month period. Data collected included triage complaint, acuity, and time; demographics; time of screening; and total length of stay (LOS). For each screened patient, ED crowding was quantified by the number of patients waiting to be seen, number of patient-hours in the ED, number of admitted patients boarding in the ED, and overall occupancy. Logistic regression analysis of enrollment and ED crowding factors, controlling for triage acuity, screening time, and day of the week, was performed.
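The 6 mL/kg benchmark in the ARDS ventilation study above is indexed to predicted body weight, not actual weight. A sketch using the commonly cited ARDSNet predicted-body-weight formulas; the abstract does not state which PBW formula was used, so this is an assumption:

```python
def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    """ARDSNet-style PBW: 50 kg (male) or 45.5 kg (female) + 2.3 kg per inch over 5 ft."""
    height_in = height_cm / 2.54
    base = 50.0 if male else 45.5
    return base + 2.3 * (height_in - 60.0)

def target_tidal_volume_ml(height_cm: float, male: bool, ml_per_kg: float = 6.0) -> float:
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

# A 175-cm male: PBW ~70.5 kg, so a ~423 mL target rather than one based on actual weight.
print(round(target_tidal_volume_ml(175, True)))
```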
Objectives: To determine the validity of self-reported prescription filling among patients issued prescriptions at emergency department (ED) discharge. Methods: We analyzed a subgroup of 1,026 patients enrolled in a randomized controlled trial who were prescribed at least one medication at ED discharge, were covered by Medicaid insurance, and completed a telephone follow-up interview one week after the index ED visit. We extracted all pharmacy and health care utilization claims information from a state Medicaid database for all subjects within 30 days of their index ED visit. We used the pharmacy claims data as the gold standard and evaluated the diagnostic accuracy of self-reported prescription filling obtained during the follow-up interview by estimating its sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Multivariate logistic regression analyses were conducted to examine whether the accuracy of self-reported prescription filling varied significantly by drug, patient, and health care characteristics. Background: It is difficult to generate a 95% CI when a study sample demonstrates 100% sensitivity. In this case, the most likely population sensitivity is 100%. However, other, lower population sensitivities can also plausibly yield a sample sensitivity of 100%. The lowest population sensitivity likely to yield a sample sensitivity of 100% can be located iteratively with random binomial sampling. The random binomial samples generated using this population sensitivity can then be used in the negative LR bootstrap to generate a 95% CI estimate. Methods: CIs were generated and compared for a range of theoretical sample sizes and sensitivities approaching and including 100%, using StatXact software, the simple technique of individual extremes, and the bias-corrected and accelerated (BCa) bootstrapping technique. Background: Emergency departments and emergency medical services (EMS) are often the front-line providers during an outbreak. Advance notice of an outbreak can not only prepare providers for treating individuals, but can also improve overall population health by enabling a response early in the outbreak's course. Objectives: To demonstrate that EMS surveillance data and signal detection methods can provide a surrogate measure of outbreak acuity and that a novel text-analytic tool can improve time to detection of an outbreak. Methods: The National Collaborative for Bio-Preparedness retrospectively analyzed North Carolina (NC) EMS records from 04/01/09 to 11/30/10 and identified cases of gastrointestinal distress (GID) using chief complaints and other free-text fields. The GID cases were plotted across time. A standard method of signal detection (cumulative summation [CUSUM]) and a new signal detection method (text analytics and proportional charts [TAP]) were applied to the dataset. An outbreak was defined as a case count three standard deviations above the baseline mean for three of five consecutive days. Acuity was defined as the slope of the line from the day and case count of the first signal from each method to the peak of the outbreak. Results: The scatterplot shows a case distribution across time that approximates an epidemic curve for an outbreak in NC during the winter of 2009/10 (confirmed to be norovirus). TAP generated an alert 76 days prior to the peak of the outbreak on 02/19/10. CUSUM generated an alert 48 days prior to the peak. A total of 11,556 and 7,627 cases occurred after the TAP and CUSUM signals, respectively, until the peak of the outbreak; CUSUM identified 66% (95% CI 65-67%) of the cases occurring after the TAP signal. The acuity line associated with the initial TAP signal was less steep (1.0 cases/day) than that associated with the CUSUM signal (1.4 cases/day). Conclusion: A new signal detection method, TAP, can provide earlier detection of an outbreak compared to a standard method. The acuity slope can represent a relative measure of outbreak intensity from a population perspective. Further study of this measure could help define the relative time sensitivity (leading vs. lagging indication) and the potential effect of earlier front-line responses.
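CUSUM, the standard detection method in the outbreak-surveillance study above, is a short recurrence: accumulate standardized exceedances above the baseline mean and alert when the running sum crosses a decision threshold. A generic sketch; the parameters k and h are conventional defaults, not the study's settings:

```python
import numpy as np

def cusum_alerts(daily_counts, baseline_mean, baseline_sd, k=0.5, h=5.0):
    """Return indices of days where the upper CUSUM statistic crosses h.

    k: allowance (in SD units) subtracted each day; h: decision threshold.
    """
    s, alerts = 0.0, []
    for day, count in enumerate(daily_counts):
        z = (count - baseline_mean) / baseline_sd
        s = max(0.0, s + z - k)   # accumulate only upward drift
        if s > h:
            alerts.append(day)
            s = 0.0               # reset after signaling
    return alerts

# Simulated season: ~20 GI cases/day, with an outbreak ramp starting at day 60.
rng = np.random.default_rng(7)
counts = rng.poisson(20, 120)
counts[60:] += np.arange(60) // 2   # slowly growing excess
print(cusum_alerts(counts, baseline_mean=20, baseline_sd=np.sqrt(20)))
```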
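The binomial-search idea in the 100%-sensitivity CI abstract above can also be sketched directly: scan downward from 100% until a candidate population sensitivity is no longer likely to produce an all-positive sample, then bootstrap from there. A loose illustration of the approach as described, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def lowest_plausible_sensitivity(n_diseased: int, alpha: float = 0.05) -> float:
    """Lowest population sensitivity whose chance of producing n/n true positives
    in a sample of n diseased patients still exceeds alpha."""
    p = 1.0
    while p > 0 and p ** n_diseased >= alpha:   # P(all n positive) = p^n
        p -= 0.001
    return p + 0.001

def bootstrap_sensitivity_ci(n_diseased: int, n_boot: int = 10_000):
    p_low = lowest_plausible_sensitivity(n_diseased)
    draws = rng.binomial(n_diseased, p_low, size=n_boot) / n_diseased
    return np.percentile(draws, [2.5, 97.5])

# With, say, 26 diseased patients all detected, how low could true sensitivity be?
print(lowest_plausible_sensitivity(26))   # ~0.89
print(bootstrap_sensitivity_ci(26))
```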
Objectives: To describe the frequency of repeat enrollment within a specific EFIC trial and the effect of repeat enrollments on the analysis of the primary outcome. Methods: The Rapid Anticonvulsant Medication Prior to Arrival Trial (RAMPART) was a randomized EFIC trial to determine whether intramuscular midazolam (IM) is noninferior to intravenous lorazepam (IV). Adults and children in status epilepticus after arrival of paramedics were enrolled. The primary outcome was cessation of seizures without rescue therapy upon arrival to the ED. In RAMPART, only the first enrollment for any individual was used in the predefined primary analysis. A secondary analysis of the primary outcome incorporating repeat enrollments was performed. Outcome and frequency of enrollments were assessed under three scenarios: 1) ignoring within-subject correlation; 2) accounting for the correlation but ignoring treatment crossover (e.g., randomized to IV, then IM during re-enrollment); and 3) accounting for the correlation and excluding cases with treatment crossover. A generalized linear mixed model with treatment group as the factor of interest estimated the intraclass correlation coefficient and the standard error of the treatment effect (on the logit scale). Results: There were 1023 enrollments among 893 unique individuals, of whom 85 accounted for the 130 re-enrollments (64 had 2 enrollments, 13 had 3, and 8 had 4-14). Treatment crossover occurred in 44%. The treatment effect, standard error, and ICC under the three scenarios are given in the table. Conclusion: Ignoring the within-subject correlation biases the variance estimate of the treatment effect. In RAMPART the effect would have been minimal, since the number of re-enrollments was low and the treatment effect was large, but repeat enrollments should not be treated as independent. Emergency randomized trials should incorporate methods to either eliminate or account for multiple enrollments by individual subjects. Background: Low back pain (LBP) is a common reason for emergency department (ED) visits. Pain and functional outcomes after an ED visit for LBP tend to be poor. ED-based, clinically oriented LBP research is hampered by the complexity of the available outcome instruments, which can be time-consuming and difficult to administer, both in the ED and during telephone follow-up. Objectives: The purpose of this investigation was to determine if a shorter version of the well-validated and commonly used Roland Morris Disability Questionnaire (RMDQ) would retain the original 24-item instrument's ability to assess functional outcomes accurately in ED patients with LBP. Methods: We used de-identified data obtained from a prospective LBP cohort study, which enrolled 674 patients during an index ED visit for LBP and followed them by telephone one week and three months later. Five items were selected from the original 24 items of the RMDQ using confirmatory factor analysis. Internal consistency of the abbreviated scale was measured using Cronbach's alpha. The strength of association between the five-item scale (RM5) and the parent scale was determined at baseline, 7 days, and 3 months. We also determined the association between change in the parent scale and change in the RM5.
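Scenario 2 in the RAMPART re-enrollment analysis, accounting for within-subject correlation, is what a mixed model or GEE buys you. RAMPART's actual analysis used a generalized linear mixed model; the closely related GEE with an exchangeable working correlation is shown here because statsmodels supports it directly. All variable names and data are invented placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1023
df = pd.DataFrame({
    "subject_id": rng.integers(0, 893, n),     # repeat enrollments share an id
    "im_treatment": rng.integers(0, 2, n),     # IM midazolam vs. IV lorazepam
    "seizure_stopped": rng.integers(0, 2, n),  # primary outcome (placeholder)
})

# Exchangeable working correlation: enrollments within a subject are correlated.
model = smf.gee(
    "seizure_stopped ~ im_treatment", groups="subject_id", data=df,
    family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(model.summary())  # robust SEs, unlike a naive independence analysis
```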
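Cronbach's alpha, used just above to measure the internal consistency of the abbreviated RMDQ scale, is computable in a few lines. A generic sketch with simulated 0/1 item responses (RMDQ items are scored 0/1); none of these numbers come from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scored responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulate 674 respondents whose 5 items share one underlying disability factor.
rng = np.random.default_rng(5)
latent = rng.normal(size=(674, 1))
items = (latent + rng.normal(scale=1.0, size=(674, 5)) > 0).astype(float)
print(cronbach_alpha(items))  # correlated items push alpha well above 0
```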
Objectives: We sought to determine if immediate exercise stress echo (IESE) is a useful tool for early triage and safe discharge of low-risk CP patients in the ED. Methods: Low-risk CP patients (modified TIMI risk score 0 or 1) with a single normal troponin and a non-ischemic EKG who would otherwise have received our hospital's standard observation admission were eligible for study enrollment and IESE. These patients were compared to a cohort of similar low-risk CP patients admitted to our observation unit (OBS group). Study patients with a normal exercise stress echocardiogram (ESE) were discharged directly from the ED, while those with an abnormal ESE were admitted and received further testing at the discretion of the admitting physicians. Follow-up was performed by telephone at 1 month and 6 months. Clinical cardiac events were defined as MI, death, and any cardiac revascularization. Objectives: We sought to determine the prevalence and outcomes associated with emergency department recidivism in patients after an index admission and evaluation in an ED CPU. Methods: Retrospective cohort study of patients admitted to a CPU in a large-volume academic urban ED. Inclusion criteria were age >18 years, American Heart Association (AHA) low-to-intermediate risk, ECG nondiagnostic for ACS, and a negative initial troponin I. Excluded were patients aged >75 with a history of CAD. Standardized chart abstraction forms were used, charts of all repeat visits were reviewed by two trained abstractors blinded to the study hypothesis, and a random sample of charts was examined for inter-rater reliability. Return visits were categorized as cardiac-ACS, cardiac-non-ACS, or noncardiac based on a priori criteria. Social Security Death Index searches were performed on all patients. T-tests and Pearson chi-square tests were used for continuous and categorical comparisons of demographics, cardiac comorbidities, and risk scores. Results: 2141 patients were enrolled. Mean age for the cohort was 52.6 ± 12 years, and 55% were female. Mean age for return patients was 52.8 ± 12.4, and 56% were female. The mean TIMI scores were 0.5 ± 0.8 and 0.6 ± 0.8, respectively. 36.7% of CPU patients returned to the ED within one year vs. 5.4% of all ED patients (p<0.01). The median number of subsequent ED visits was 3 (IQR 1-5). 1.7% had a 365-day return secondary to a cardiac cause, while the 365-day MACE rate was 0.7% (95% CI 0.4-1%). Of the return visits, 25% (95% CI 23.6-26.9%) were for chest pain. Patients with return visits had a lower stress-testing utilization rate on their index CPU visit than non-returning patients (p<0.01), but only an additional 5% underwent stress testing on return visits. Conclusion: Patients evaluated in an ED CPU have a very low rate of major adverse cardiac events at one year. However, these same patients have an ED recidivism rate more than six times that of the general ED population. Further research is necessary into those factors that better address the needs of this patient population with high ED utilization. Objectives: We aimed to assess the yield of provocative testing in our emergency department (ED) based chest pain unit. Methods: We conducted a prospective observational study of patients evaluated for possible ACS in an ED-based chest pain unit between 2004 and 2010 at an urban academic tertiary care center.
Patients with symptoms of possible ACS and without an ischemic electrocardiogram or positive biomarker were enrolled. All patients were evaluated by provocative testing for coronary ischemia, and those with positive results on provocative testing were evaluated by cardiologists to determine the appropriateness of further invasive testing versus medical management. Demographic and clinical features, results of diagnostic testing, and invasive therapeutic interventions were recorded. Diagnostic yield (true positive rate) was calculated, and therapeutic yield was assessed through blinded, structured chart review using AHA designations for potential benefit from percutaneous intervention (class I, IIa, IIb or lower). Results: 4181 patients without a history of CAD were enrolled. Chest pain was a presenting complaint in 94%, most were intermediate-risk (73%), and 38% of the cohort was male. Provocative testing was positive for coronary ischemia in 470 (13%), of whom 123 went on to coronary angiography. Obstructive disease was confirmed in 63/123 (51% true positive), and 28 (0.7% overall) had findings consistent with potential benefit from revascularization (class I or IIa). Overall, 57 patients (1.4% of the cohort) received revascularization procedures (49 PCI, 8 CABG). Conclusion: In our cohort of patients evaluated for ACS in an ED-based chest pain unit, provocative testing generated a very small therapeutic yield, while diagnostic yield was as often falsely positive as truly positive. Conclusion: As a dichotomized risk-stratification tool in the EDOU, the CARdiac score was comparable to the TIMI score in predicting primary outcomes and inpatient admission. The advantage of the CARdiac score for EDOU risk stratification may lie in its ease of recall and calculation. Background: EDOU protocols typically utilize three troponin tests spaced six hours apart to rule out acute coronary syndrome. These protocols may require three negative troponin results prior to allowing patients to undergo stress testing. Objectives: We evaluated the utility of a third troponin in the EDOU evaluation of patients with chest pain. Methods: We performed a prospective, observational study of all chest pain patients placed in our EDOU over the three-year period from June 1, 2009, through May 31, 2012. We recorded baseline data, outcomes related to the EDOU stay, results of laboratory testing, and inpatient admission. Our laboratory utilizes a high-sensitivity troponin I assay, with results less than 0.05 ng/mL considered negative, 0.05-0.49 classified as equivocal, and 0.50 or greater considered positive. Patients were required to have negative or equivocal troponins in the ED prior to placement in the EDOU. We evaluated patients who had three troponin tests performed as part of their ED and EDOU evaluations of chest pain. We focused specifically on those whose first two troponin results were negative to determine the number with a positive or equivocal third troponin. Results: 1276 patients were evaluated for chest pain in our EDOU during the three-year study period. The average age was 54.1 years (range 18-92 years) and 46% were male. Of the 1276 patients evaluated, 1128 (88.4%) had three troponin tests. Of these 1128 patients, none who had two negative troponins had a positive third troponin. Twelve patients (1.1%) had equivocal (0.05-0.49 ng/mL) third troponins after the first two were negative. Ten of these 12 patients were discharged without additional testing. Two of these 12 patients underwent coronary angiography, of whom one had stent placement and the other required no intervention.
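The laboratory thresholds in the serial-troponin study above map cleanly to a small classification helper. A sketch, with values in ng/mL exactly as stated; the function name is illustrative:

```python
def classify_troponin_i(value_ng_ml: float) -> str:
    """Classify a troponin I result per the thresholds in the abstract."""
    if value_ng_ml < 0.05:
        return "negative"
    if value_ng_ml < 0.50:
        return "equivocal"
    return "positive"

series = [0.02, 0.03, 0.07]   # two negatives, then an equivocal third draw
print([classify_troponin_i(v) for v in series])
# In the cohort, no patient with two negatives ever crossed to "positive".
```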
Conclusion: Among EDOU patients evaluated for chest pain, we had no cases of a positive third troponin after negative initial troponin testing, and very few cases of equivocal results for the third troponin. In two cases the equivocal third troponin led to additional testing, and in one it resulted in the patient undergoing stent placement. While our results do not definitively establish a lack of utility in obtaining a third troponin in the EDOU, it would not seem necessary to delay stress testing to obtain this test. Background: EDOUs typically perform routine cardiac stress testing or coronary computed tomography (CT) to rule out ischemic cardiac chest pain. Some have recently called into question the utility of routine stress testing and advanced anatomic imaging in the low-risk patient with chest pain. Objectives: We evaluated the rate of false-positive stress testing and coronary CT in patients admitted to our EDOU for the evaluation of chest pain. Methods: We performed a prospective, observational study of all chest pain patients placed in our EDOU over the three-year period from June 1, 2009, through May 31, 2012. We recorded baseline data, outcomes related to the EDOU stay, and results of testing. We reviewed all patients who underwent stress testing or coronary CT prior to cardiac catheterization to determine the false-positive rates of these studies. Stress tests were typically treadmill echocardiogram stress tests and were considered positive if a "positive" or "equivocal" interpretation by the reviewing cardiologist prompted cardiac catheterization. Coronary CT that led to subsequent cardiac catheterization was considered a positive test. Cardiac catheterization that resulted in stent placement or coronary artery bypass graft (CABG) was considered positive. Objectives: The present study examined the relationship between causal attribution, perceived illness severity, and smoking stage of change in a sample of emergency department patients (N = 242). Methods: Perceived illness severity was measured at two time anchors: 1) when patients first noticed their symptoms, and 2) when patients first came to the hospital. Smoking stage of change was measured ordinally: precontemplation (no interest in change), contemplation (intends to change in the next 6 months), and preparation (intends to change in the next 30 days). Readiness to change was also measured on a continuous five-point Likert scale. Interactions between causal attribution and perceived illness severity were also examined in relation to stage and readiness to change. Results: One-way ANOVAs revealed that participants who were in the preparation (M = 3.17, 95% CI 0.22, 0.71) and contemplation (M = 3.01, 95% CI 0.09, 0.49) stages were more likely to attribute their current illness to smoking than those in precontemplation (M = 2.71, 95% CI -0.49, -0.09). In addition, while perceived illness severity was not related to stage of change by itself, the interaction between causal attribution and perceived illness severity was strongly related to readiness to change at both time anchor 1 (F(2, 238) = 7, p = 0.001) and time anchor 2 (F(2, 238) = 9, p < 0.001). To validate our findings, we reanalyzed the data using participants' readiness-to-change ratings on the five-point Likert scale.
Readiness to change was related to causal attribution (r(241) = 0.369, p < 0.01), to perceived illness severity at time 1 (r(240) = 0.142, p < 0.05) and time 2 (r(240) = 0.194, p < 0.01), and to the causal attribution × perceived illness severity interaction terms at time 1 (r(241) = 0.306, p < 0.01) and time 2 (r(240) = 0.340, p < 0.01). Conclusion: Participants who connected their illness to smoking and who perceived their illness to be serious were much more likely to intend to quit smoking. Two kinds of interventions are needed: interventions that help translate these intentions into lasting behavior change, and interventions that help to increase awareness of causal attribution and perceived severity in those who have smoking-related illnesses but do not recognize it. Background: ED visits for suicidal ideation (SI) may be an opportunity for suicide prevention. An understanding of provider beliefs and behavior is important to inform efforts to improve ED-based assessment and treatment of suicidal patients. Objectives: As part of the Emergency Department Safety Assessment and Follow-Up Evaluation (ED-SAFE) study, we sought to examine the knowledge, attitudes, and practices of ED providers concerning suicidal patient care and to identify characteristics associated with screening for SI. Methods: This was an observational, cross-sectional survey of physicians and nurses working at eight EDs in seven states; 631 providers completed the voluntary, anonymous survey (79% response rate) between June 2010 and March 2011 (~12 weeks per site). Results: The median participant age was 35 (IQR 30-44) years and 57% were female. Half (48%) were nurses, and half were attending (22%) or resident (30%) physicians. More expressed confidence in SI screening skills (81-91%) than in skills to assess risk severity (64-70%), counsel patients (46-56%), or create safety plans (23-40%), with some differences between providers. Few thought mental health provider staffing was almost always sufficient (6-20%) or that suicidal patient treatment was almost always a top ED priority (15-21%). Conclusion: ED providers reported confidence in suicide screening skills but gaps in further assessment, counseling, and referral skills. Efforts to promote better identification of suicidal patients should be accompanied by a commensurate effort to improve risk assessment and management skills, along with improved access to mental health specialists. Background: Alcohol intoxication is a behavioral emergency with a relatively transient course yet significant symptom overlap with other, more enduring psychopathologies. Given the manifest similarities yet wide prognostic variability between alcohol intoxication and other non-substance-induced conditions, the appropriateness of applying a psychiatric hold in any one case presents a significant challenge for emergency physicians (EPs). Objectives: A retrospective chart review was performed in order to examine the extent to which ED patient blood alcohol levels (BAL) predict whether EP-issued psychiatric holds are continued or rescinded by attending psychiatrists.
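The interaction finding in the smoking stage-of-change analysis above amounts to correlating the product of two centered predictors with the readiness rating. A sketch with simulated data; the variable names and effect sizes are invented to illustrate the mechanics only:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(9)
n = 242
attribution = rng.normal(size=n)      # causal attribution rating (centered)
severity = rng.normal(size=n)         # perceived illness severity (centered)
interaction = attribution * severity  # the attribution x severity term
readiness = 0.4 * attribution + 0.3 * interaction + rng.normal(size=n)

r, p = pearsonr(interaction, readiness)
print(f"r = {r:.2f}, p = {p:.3f}")    # positive r, echoing the reported pattern
```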
Objectives: This study describes the rates of eating disorders in adult patients who present to the emergency department (ED) for medical care and examines the relationship between eating disorders, depression, and substance abuse in these patients. Methods: Emergency department patients aged 21-65 years (n=1795) completed a computerized questionnaire that included validated screening tools for each of the variables in question. Analyses were conducted comparing individuals who screened positive for an eating disorder with those who did not, based on demographics (sex, age, race, income, education), body mass index (BMI), risky drinking behavior, other substance use, and depression. This study was supported by the National Institute on Alcohol Abuse and Alcoholism, grant #R01AA018659. Results: Nearly sixteen percent (15.9%) of all patients screened positive for an eating disorder regardless of their reason for presenting to the ED, including 9.3% (63/668) of all males and 19.8% (223/1127) of all females screened. Obesity (BMI >30) was reported in 37.7% of all patients. Patients who screened positive for an eating disorder were significantly more likely to be obese than those who did not (OR 2.68, 95% CI 1.98-3.62, p<0.001). These patients were also more likely to screen positive for depression (OR 3.19, 95% CI 2.28-4.47, p<0.001) and to be female (OR 2.37, 95% CI 1.76-3.19). No differences in rates of eating disorders were seen across racial groups, levels of education or income, or for any of the included substance abuse variables. Conclusion: Eating disorders are common among adult emergency department patients and are associated with high rates of comorbid obesity and depression. Given the significant morbidity and mortality associated with each of these conditions, targeted screening in the ED may be warranted. Background: Suicidal ideation (SI) in ED patients frequently co-occurs with substance abuse (SA). An ED visit for SI may be due to lack of access to outpatient mental health services. Objectives: To describe and compare rates of prior mental health services utilization (MHSU) among suicidal ED patients with and without co-occurring SA. Methods: A pilot-tested computerized survey using previously validated items was administered to a convenience sample of eligible subjects (age 18 or older, English-speaking, sober, medically cleared for psychiatric evaluation, under voluntary treatment) undergoing assessment for SI in the psychiatric unit of an urban tertiary academic ED. Subjects were asked about their SA status (current SA in the last 3 months vs. never or sober at least 3 months), prior MHSU over the previous 12 months (psychiatric hospital stays, outpatient therapy including SA treatment, psychotropic medications), and sociodemographic characteristics (age, sex, race/ethnicity, education, homelessness, insurance, employment). Bivariate and multivariate logistic regression analyses were performed to identify the relationship between SA status and prior MHSU, controlling for potential confounders. Results: Ninety. Methods: A retrospective chart review was conducted for patients evaluated as trauma activations or admitted to the trauma service from the ED between 1/2008 and 12/2011. Data were obtained from the trauma registry of a community teaching hospital with Level I trauma accreditation. Pediatric patients, pregnant patients, those dead on arrival or who died in the trauma resuscitation area (TRA), and transfers from an outside facility were excluded. Patients were stratified by the first available SBP into three groups: <90, 90-105, and >105 mm Hg. Data were collected for the primary outcomes, patient demographics, injury severity score (ISS), and mechanism of injury. Data were analyzed with SPSS software using chi-square and Kruskal-Wallis tests. Results: Of 8412 patients identified in the institutional trauma registry, 1837 were excluded and 6575 were analyzed for outcomes. 91.1% of the injuries were from blunt mechanisms, and the mean ISS was 8.5.
Comparison of the three SBP groups revealed statistically significant overall differences in mortality, immediate surgical intervention, ICU LOS, hospital LOS, and transfusion in the TRA (see table). Between-group analyses comparing the three groups showed statistically significant differences for all measured outcomes. We found an increase in all measured outcomes as the initial SBP decreased across the three groups. Conclusion: While a precise value was not determined, our data suggest that a higher SBP cut-off for hypotension would potentially identify trauma patients with worse outcomes. Objectives: The purpose of the study was to validate the finding that ETCO2 can be used as a non-invasive indicator of shock. The question being asked: does lactate correlate with specific levels of ETCO2 and vital signs (sBP, HR, RR)? Secondarily, is ETCO2 a better non-invasive marker of shock in trauma patients than vital signs? Methods: Design: Prospective observational cohort validation study. Setting: Urban tertiary care Level I trauma center. Subjects: Any person presenting with trauma from 7/1/12 to 11/16/12. Observations: Baseline ETCO2 was recorded from nasal cannula along with vital signs (sBP, HR, RR) and ABG (measuring lactate, pH, and base excess). The Pearson correlation (r) and coefficient of determination (r²) were calculated. Positive and negative likelihood ratios were calculated from contingency tables using lactate as the gold standard measure of shock. Objectives: The purpose of this study was to determine if a correlation was present between PoC BE and lactate, and furthermore to determine its usefulness as a prognostic variable identifying which patients required greater resuscitation. We hypothesized that there would be a strong correlation and a use for PoC BE in the ED. Methods: Design: Prospective observational cohort study. Setting: Level I academic, urban trauma center. Participants: This study took place from July 1, 2012 to November 16, 2012. Inclusion criteria were any patient >18 years old who presented with trauma. Exclusion criteria were any patient <18 years old or in traumatic arrest. Observations: Baseline demographics (age, sex, mechanism of injury), initial vital signs, PoC BE, and lactate were obtained. Activation of the massive transfusion protocol was used as the gold standard measure of shock to determine the test characteristics of BE and lactate, as these patients were deemed to be in shock.
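Positive and negative likelihood ratios like those in the ETCO2 study above come straight from a 2x2 table against the gold standard. A generic sketch; the counts are invented, not the study's data:

```python
def likelihood_ratios(tp: int, fp: int, fn: int, tn: int):
    """LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Hypothetical table: low ETCO2 vs. elevated lactate (shock) as the gold standard.
lr_pos, lr_neg = likelihood_ratios(tp=30, fp=10, fn=5, tn=80)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")  # 7.7 and 0.16 with these counts
```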
Background: Both glial fibrillary acidic protein (GFAP) and S100B are found in glial cells and are released into serum following a traumatic brain injury (TBI). S100B has been extensively studied, but its clinical utility remains controversial because of its release from bone and soft tissues. Objectives: This study examined the ability of GFAP and S100B to detect intracranial lesions on CT in trauma patients with and without TBI and assessed their performance in the presence of fractures. Methods: This prospective cohort study enrolled a convenience sample of adult trauma patients at a Level I trauma center with and without mild to moderate TBI (MMTBI). Patients with a MMTBI had blunt head trauma followed by loss of consciousness, amnesia, or disorientation, and a GCS of 9-15. Non-TBI trauma included orthopedic and soft tissue injuries. Serum samples were obtained from each patient within 4 hours of injury and measured by ELISA for GFAP and S100B (ng/mL). The primary outcome was the presence of traumatic intracranial lesions on CT scan (CT+). For patients in whom CT was not ordered, telephone follow-up was conducted to assess CT status. The secondary outcome was the presence of extremity fractures. Biomarker performance was assessed using the area under the ROC curve (AUC, 95% CI). Results: There were 180 patients enrolled; 119 (66%) had a MMTBI (116 with GCS 13-15, 3 with GCS 9-12) and 61 (34%) had trauma without MMTBI. Median age was 40 years (range 18-83) and 106 (59%) were male. The proportion of CT+ was 13% in MMTBI, 0% in trauma without MMTBI, and 8% overall. There were 56 (31%) patients with extremity fractures. In MMTBI patients the AUC for CT+ was 0.85 (0.68-1.00) for GFAP and 0.80 (0.64-0.96) for S100B. In MMTBI patients with extremity fractures the AUC was 0.94 (0.82-1.00) for GFAP and 0.66 (0.46-0.87) for S100B. In all patients (with and without MMTBI), the AUC for CT+ was 0.88 (0.68-1.00) for GFAP and 0.83 (0.69-0.98) for S100B. In all patients with extremity fractures the AUC for CT+ was 0.96 (0-1.00) for GFAP and 0.76 (0.62-0.91) for S100B. Conclusion: The performance of GFAP for detecting CT lesions was very good regardless of the presence of extremity fractures. However, the performance of S100B was poor in the presence of fractures. In both a general trauma and a MMTBI population, GFAP outperformed S100B in detecting intracranial CT lesions. Validation is ongoing. Objectives: To evaluate the ability of age-specific prehospital physiologic criteria to predict serious injury among older adults and to assess the effect of these revised criteria on triage accuracy compared to current practices. Methods: This was a retrospective cohort study of injured adults ≥55 years transported by 94 EMS agencies to 122 hospitals (trauma and non-trauma) in seven regions of the western U.S. from 2006-2008. EMS records were probabilistically linked to hospital data from trauma registries, ED databases, and state discharge databases. "Serious injury" was defined as an Injury Severity Score (ISS) ≥16 (primary outcome). We evaluated linear and non-linear covariates for GCS, sBP, respiratory rate, heart rate, and shock index in multivariable logistic regression models, unadjusted and adjusted for 12 prehospital confounders. We then used classification and regression tree analysis to assess the relative importance of each physiologic criterion, and descriptive statistics to estimate changes in triage sensitivity and specificity. Results: 44,890 injured older adults were evaluated and transported by EMS over the 3-year period, of whom 2,328 (5.2%) had serious injuries. Non-linear associations existed between all physiologic measures and serious injury (p<0.001 for all, unadjusted and adjusted), except for heart rate. The most important age-specific physiologic criteria (in order) were: GCS ≤14; assisted ventilation (BVM or intubation); respiratory rate <10 or >20 breaths/min; and SBP <110 or >180 mm Hg. Compared to current triage practices, the revised physiologic criteria would increase triage sensitivity from 79.8% to 90.1% (absolute difference 10.4%, 95% CI 9.2-11.7%), while reducing specificity from 75.5% to 50.4% and approximately doubling the number of triage-positive patients without serious injuries (over-triage) from 10,416 to 21,100. Conclusion: Existing prehospital physiologic triage criteria could be revised to better identify seriously injured older adults and reduce under-triage, at the expense of over-triage to major trauma centers.
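The revised age-specific criteria above reduce to a short screening rule. A sketch with the thresholds as listed in the abstract; the function and field names are invented:

```python
def geriatric_triage_positive(gcs: int, assisted_ventilation: bool,
                              rr: float, sbp: float) -> bool:
    """Revised physiologic triage for injured adults >=55, per the abstract:
    GCS <=14, assisted ventilation, RR <10 or >20, or SBP <110 or >180 mm Hg."""
    return (
        gcs <= 14
        or assisted_ventilation
        or rr < 10 or rr > 20
        or sbp < 110 or sbp > 180
    )

# An SBP of 105 is negative under the traditional <90 criterion but positive here.
print(geriatric_triage_positive(gcs=15, assisted_ventilation=False, rr=16, sbp=105))
```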
Objectives: We hypothesize that sonographic features of pediatric abscesses are different from those of adult abscesses. Methods: A retrospective observational study of skin and soft tissue abscesses. Pediatric patients (0-17 years) with suspected abscesses presenting from 2008 to 2012 were imaged using ultrasound. Images were digitally recorded and blindly reviewed by an experienced sonographer to determine the presence or absence of pre-determined image characteristics. A random subset of images was re-reviewed by a second experienced sonographer blinded to the initial review. Patient demographics and image characteristics were recorded. A kappa analysis was performed to determine the agreement for each image characteristic. For comparison, 142 adult patients with abscesses were randomly chosen from an existing image database from the same time period using a random number generator. Comparison between groups (adult and pediatric) was performed using a comparison of 95% confidence intervals using the modified Wald method. Results: 325 pediatric patients underwent ultrasound for a suspected abscess (142 positive for abscess). 284 patients (142 pediatric and 142 adult) were included in the final analysis. There was no difference between groups for female sex (51% vs 49%), but abscess volume was significantly smaller in children. Methods: Prospective randomized trial comparing hands-on TT to SS training. During the 2012 Chicago NATO summit, a TUS workshop was provided for a visiting DRT. All 36 DRT members consented to participation. Subjects first completed a validated written test followed by SS video didactics. Subjects were stratified by provider role and randomized into three groups for hands-on training. Group 1 completed TT and SS, Group 2 completed only SS, and Group 3 completed only TT. Next, each group completed a practical assessment of TUS image acquisition on live models and interpretation of actual case images. Then, subjects completed the same written test. The main study outcome was performance on the practical assessments. T-test analyses were used to compare pre- and post-test scores. ANOVA analyses were performed to examine group effect on differences in practical assessments and pre- and post-test scores. Results: Among subjects, 25% were nurses, 8% nurse practitioners, 3% residents, 28% physicians, 3% EMT-As, 30% EMT-Bs, and 3% pharmacists. Provider roles were equally distributed among groups. The mean pre-test score was 17% (0-70%) and the mean post-test score was 54% (36-84%) (p < 0.001). Although a statistically significant increase was observed from pre- to post-test scores for all subjects, there was no significant group effect. The mean practical image acquisition score was 3.3 (scale: 1 = totally inadequate, 3 = just adequate, 5 = textbook images obtained). The mean correct image interpretation score was 78%. There was no statistically significant group effect on practical assessment scores. Conclusion: After hands-on training with either TT or SS, DRT members were able to adequately acquire and interpret TUS images. Based on these initial results from our small sample, the SS may provide an effective, logistically simple, portable training option warranting further, larger-scale study. Objectives: To evaluate the effect of different liquid gastric decontamination adjuncts on examiners' ability to identify the presence and quantity of tablets using POCUS in a simulated massive OD.
Methods: This prospective, blinded, pilot study was performed at an academic emergency department using volunteer resident and staff EPs trained in POCUS. Five black, opaque bags were prepared with the following contents: 1 liter (L) of water, 1 L of water with regular aspirin (ASA), 1 L of water with 50 enteric-coated aspirin tablets (ECA), 1 L of polyethylene glycol (PEG) with 50 ECA, and 1 L of activated charcoal (AC) with 50 ECA. Participants performed POCUS on each bag using a 10-5 MHz linear transducer and completed a standardized questionnaire: (1) Were pills present? YES/NO; (2) If tablets were identified, estimate the number (1-10, 11-25, >25). A single test on proportions utilizing the binomial distribution was used to determine if the number of EPs who identified tablets differed from 50% chance. For those tablets identified in the different solutions, another test on proportions was used to determine whether the type of solution made a difference. Since three options were available, a probability of 33.3% was used. Results: Thirty-seven EPs completed the study. All EPs were able to identify ECA in water and PEG, but only 20 identified tablets in AC, and no EP identified regular ASA in water (Table 1). Of those who identified tablets, less than one-third identified the correct amount. Conclusion: Non-specialist physicians practicing in the developing world can be trained to reliably produce and interpret CPUS images. Point-of-care CPUS is a valuable tool in differentiating the cause of dyspnea in low-resource settings, where the spectrum of disease may be vastly different from that in the developed world, with a higher prevalence of undiagnosed heart failure, valvular disease, and effusions than anticipated. Objectives: To assess the number of proctored FAST exams necessary for the novice sonographer to accurately acquire the four views of the exam. Methods: This was a prospective educational intervention study of FAST exam mastery by novice third- and fourth-year medical students (MS). Students were excluded if they had formal US training or prior experience with FAST. All students received a two-hour online didactic course on basic ultrasound and FAST. Students were randomized into one of three groups: Group 1 performed five proctored exams; Group 2, 10 exams; and Group 3, 15 exams. Proctored exams were designed to give the students hands-on practice under the guidance of trained ultrasonographers. The proctored exams were administered monthly and limited to 10 minutes to standardize the training. At the end of each month students were tested on the FAST exam with the same two standardized patients, either an 8-year-old male (BMI = 16.5, 66th percentile) or a 12-year-old female (BMI = 18.8, 60th percentile). Students had two minutes to perform the test exam. The test exams were recorded on video and later reviewed and graded by examiners blinded to group assignment using a standardized scoring sheet. To pass, the students were required to obtain the standard views of the organs and/or structures necessary to identify free fluid. Results: Forty-five students, 23 third-year and 22 fourth-year, ages 24-43 years, were enrolled. Groups were evenly matched for year, age, and sex. Pass rates were lowest for Group 1 (n=15) at 6.7% (95% CI 0.0%-31.8%) and significantly (p<0.05) higher for Group 2 (n=15) at 60% (95% CI 35.7%-80.2%) and Group 3 (n=15) at 86.7% (95% CI 60.9%-97.5%).
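The pass rates just reported are small-sample binomial proportions; the sketch below shows one way such 95% confidence intervals can be computed (the abstract does not state its CI method; the Clopper-Pearson "exact" interval is shown as a common choice).

```python
# Sketch: 95% CIs for the FAST pass rates above (1/15, 9/15, 13/15).
# Clopper-Pearson ("exact") is shown; the abstract's method is not stated.
from scipy.stats import binomtest

for passed, n in [(1, 15), (9, 15), (13, 15)]:
    ci = binomtest(passed, n).proportion_ci(confidence_level=0.95, method="exact")
    print(f"{passed}/{n}: {passed/n:.1%} (95% CI {ci.low:.1%} - {ci.high:.1%})")
```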
The majority of failures in Group 1 were secondary to inability to complete the exam in the allotted time, followed by difficulty in identifying the splenorenal interface. Conclusion: An online course and proctored exams provide students with the skills to perform FAST accurately on children. Five proctored exams is insufficient training for novice sonographers to master FAST. Differentiating the added effect of increasing the number of proctored exams from 10 to 15 will depend upon future enrollment. Methods: Thirty-three swine (45-55 kg) were intubated, anesthetized, and instrumented (continuous MAP, cardiac output (CO) monitoring). IO catheters were placed and confirmed with bedside video fluoroscopy. Anesthesia was adjusted to allow for spontaneous breathing with an FiO2 of 21%. CN was infused until apnea occurred and lasted for 1 min (Time Zero). Animals were then randomly assigned to IV COB (12.5 mg/kg), IO COB (12.5 mg/kg), or saline in equal volumes (<10 ml) and monitored for 60 min. Doses and volumes of the medications were based on previous studies. A sample size of 11 animals per group was based on obtaining a power of 80%, an alpha of 0.05, and a standard deviation of 0.17 in mean time to resumption of spontaneous breathing based on previous research. Time to spontaneous breathing and survival were compared using rank methods. Lactate, pH, CO, MAP, respiratory rate (RR), and minute ventilation time-curves were compared utilizing RMANOVA. Results: Baseline weights (53, 51, 51 kg), time to apnea (10:54, 10:07, 9:49 min), and CN dose at apnea (1.8, 1.7, 1.7 mg/kg) were similar. At Time Zero, mean CN blood levels (1.7, 1.7, 1.84 mcg/ml), lactate levels (3.5, 3.5, 3.1 mmol/L), and reduction in MAP from baseline (29%, 28%, 36% decrease) were similar. Two of 11 animals in the saline group survived, as compared to ten of 11 in both the IV and the IO COB groups (p<0.001). Background: The normal myocardium preferentially metabolizes fatty acids for energy production, but in shock states the myocardium switches away from fatty acids to glucose metabolism. L-carnitine (CAR) is a key component in free fatty acid metabolism. Our prior work showed that 50 mg/kg of CAR improved survival and hemodynamic parameters in a murine model of verapamil (VER) toxicity. Upon increasing the degree of VER toxicity, we did not find an improvement in survival with the same dose of CAR. Objectives: The primary objective was to determine if increasing the dosage of CAR increases survival in a model with a higher degree of VER toxicity. The secondary objective was to determine CAR's effect on hemodynamics in this model. Methods: This was a controlled, blinded, randomized animal study utilizing 20 male Sprague-Dawley rats. All animals were anesthetized and ventilated with isoflurane and instrumented to record heart rate (HR) and mean arterial pressures (MAP). To achieve VER toxicity, all animals received a constant infusion of 10 mg/kg/hr of VER starting at time zero. Five minutes after the start of the VER infusion, animals were randomized to receive either 100 mg/kg CAR or an equal volume of normal saline (NS). The primary endpoint was survival time, with death defined as pulseless electrical activity or asystole. Secondary endpoints were HR and MAP. The animals were observed for a total of 150 minutes. Data were analyzed using Kaplan-Meier time-to-event survival analysis and ANOVA with post hoc testing.
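The cobinamide abstract above sizes its groups with a power calculation (power 80%, alpha 0.05, SD 0.17 min); a sketch of such a calculation follows. The detectable difference used below is an assumption, since the abstract does not report it.

```python
# Sketch: per-group sample size for a two-sample comparison at power 0.80,
# alpha 0.05, SD 0.17 min, as in the cyanide/cobinamide abstract above.
# The detectable difference (0.21 min) is an assumption for illustration.
from statsmodels.stats.power import TTestIndPower

assumed_difference = 0.21              # minutes, hypothetical
effect_size = assumed_difference / 0.17  # Cohen's d
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80)
print(f"~{n_per_group:.1f} animals per group")
```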
A pretest sample size calculation determined that 10 animals per group were needed to detect a 50% difference in survival time. Objectives: To determine the effect of nAChR antagonism on hemodynamics and NMJ structure in a 24-hour critical care swine model of parathion poisoning. Methods: Minipigs were intubated, ventilated, and instrumented with arterial and venous lines. At time zero and every 4 hrs a muscle biopsy was taken and frozen in isopentane. Arterial blood to quantitate parathion, metabolites, and lactate was taken hourly. At t=0 pigs were given 4x the rat IV LD50 of parathion. To mimic a clinical scenario, when mean arterial pressure (MAP) reached 55 mmHg, bolus doses of atropine, 2-PAM, and diazepam were given, followed by an infusion of atropine and intermittent doses of 2-PAM and diazepam. Norepinephrine was titrated to maintain a MAP > 55 mmHg. Animals were randomly assigned to receive either rocuronium (roc) 2.5 mg/kg IM every hour (n=3) or saline placebo (n=3). Clinical NMJ function was determined every 30 minutes by acceleromyography. Muscle samples were stained with bungarotoxin to visualize nAChR. nAChR dispersion (a quantitative measure of functional nAChR) was assessed blinded to treatment allocation. Animals were euthanized 24 hours after poisoning. Results: All animals survived to 24 hours. There was no difference in the amount of norepinephrine required to maintain MAP > 55 mmHg between the roc and control groups (0.181 mcg/kg vs 0.328 mcg/kg, respectively, p=0.5). Pigs that did not receive roc demonstrated a statistically significant decrease in NMJ structure between t=0 and t=24 hours (Figure 1; * = p<0.05, ** = p<0.01). Animals that received roc demonstrated preservation of NMJ structure between t=0 and t=24 hours (Figure 2), with no change in NMJ dispersion. Conclusion: In this realistic swine model of severe parathion poisoning, comprehensive treatment combined with the nAChR antagonist rocuronium resulted in preservation of NMJ structure. Further research examining the clinical effects of such NMJ preservation in long-term animal models is urgently needed. Methods: This was a two-part investigation. In part 1, we used the Dixon up-and-down method to determine an LD50 dose of oral verapamil that would cause death within 2 hours of administration. Part 2 was a randomized controlled investigation using 20 rats. Each rat was anesthetized, ventilated, and instrumented with continuous blood pressure and heart rate monitors. After instrumentation, each rat was given a single LD50 dose of verapamil via an orogastric tube. Five minutes after VER administration, the rats were randomized into two groups (n=10/group). They received either 20% IFE or an equivalent volume of normal saline as 6 ml/kg boluses every 2.5 minutes, for three total boluses. Animals were observed for a total of 2 hours. The primary endpoint was survival. Secondary endpoints were mean arterial pressure (MAP) and heart rate (HR). Data were analyzed with Kaplan-Meier time-to-event analysis and compared using the log-rank test and ANOVA with post hoc testing. A pre-test sample size calculation determined that 10 animals per group were needed to detect a 50% difference in survival time. Objectives: To compare ILE and normal saline (NS) in a rat model of cocaine toxicity. The primary outcome was mortality, and the secondary outcome was the effect on cocaine-induced mean arterial pressure (MAP) changes. We hypothesized that ILE would decrease mortality and attenuate MAP changes compared to saline.
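Part 1 of the oral verapamil study above uses the Dixon up-and-down method; a minimal sketch of the dosing rule follows, with the step size, starting dose, and animal outcomes all assumed for illustration.

```python
# Sketch of the Dixon up-and-down scheme used above to estimate an oral
# LD50: step the dose down after a death and up after survival, then take
# a crude estimate from the oscillating tail. All values are illustrative.
import math

STEP = 0.1  # log10(dose) step between successive animals

def next_log_dose(current_log_dose, died):
    return current_log_dose - STEP if died else current_log_dose + STEP

log_dose, history = 2.0, []  # start at 10**2 = 100 mg/kg (hypothetical)
for died in [False, False, True, False, True, True, False, True]:
    history.append((10 ** log_dose, died))
    log_dose = next_log_dose(log_dose, died)

tail = [math.log10(dose) for dose, _ in history[2:]]
print(f"LD50 ~ {10 ** (sum(tail) / len(tail)):.0f} mg/kg")
```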
Methods: Twenty pre-catheterized male Sprague-Dawley rats were sedated. Ten animals received ILE and ten received NS in the same 15 ml/kg dose; this was followed by a 10 mg/kg bolus of intravenous cocaine. Continuous monitoring included intra-arterial blood pressure, heart rate, and electrocardiogram tracing. End points included a sustained undetectable MAP or return to baseline for 5 minutes. Fisher's exact test was used to compare mortality, and two-tailed t-tests were used to compare the groups on a number of physiologic variables. Results: In the NS group, 7/10 animals died, compared to 3/10 in the ILE group. Mortality was significantly higher in the saline group (p = 0.03). Between groups, there were no differences in baseline MAP, maximum MAP reached, or time to maximum/minimum MAP after cocaine exposure. The NS group reached a significantly lower MAP than the ILE group after cocaine exposure (p=0.001). Conclusion: ILE reduced mortality and attenuated cocaine-induced hypotensive effects compared with NS in this rat model. ILE is a relatively inexpensive and safe treatment and, if proven in humans, its use would represent a significant improvement over current therapy for severe cocaine intoxication. ILE should be investigated further as a potentially life-saving adjunct in the treatment of severe cocaine toxicity in humans. Background: Prior to antivenin development, the mortality rate in US coral snake bites was reported to be 10-20%. Once the Wyeth North American Coral Snake Antivenin became available, use in all bites was recommended due to the neurotoxic nature of the venom, with no treatment failures reported. Prior to Wyeth halting production in 2003, a 5-year supply was generated at the FDA's request. The final lot expired in 2008, but the FDA has periodically extended the expiration date of the dwindling supply. There has been no national study of coral snake bites since the antivenin stock became limited. Objectives: To describe coral snake bite outcomes and evaluate antivenin use by time period. Methods: Design: This is a retrospective analysis of a prospectively collected cohort utilizing a nested case-control design. Objectives: To compare the efficacy of narrative versus summary (standard) content in promoting recall of guideline recommendations regarding opioid prescribing. Methods: We conducted a mixed-methods, randomized controlled dissemination experiment. Two recommendations were selected from the opioid guideline and presented as a summary passage. Using content analysis, we coded this summary passage and identified six themes. From these themes, we constructed a fictional narrative that matched the summary in total word count and word count of individual themes. At a regional conference of emergency physicians and residents, the attendees were randomized by seat to receive either the summary passage (control) or the narrative (intervention). Non-physicians were excluded. To assess recall, we used a modified free-list elicitation technique. Participants read the passages individually. One hour later, participants listed all the details that they could recall. The written responses were scored by two independent reviewers. For each response, the presence or absence of each of the six themes was recorded using strict criteria established prior to review. A third reviewer adjudicated discrepancies. The proportion of responses that recalled each theme was determined, and the data were analyzed using logistic regression and chi-square tests.
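Mortality comparisons like the 7/10-versus-3/10 table in the ILE abstract above are handled with Fisher's exact test; a minimal sketch follows (note that the resulting p-value depends on the sidedness chosen, which the abstract does not specify).

```python
# Sketch: Fisher's exact test on a 2x2 mortality table like the ILE
# cocaine experiment above (7/10 saline deaths vs 3/10 ILE deaths).
from scipy.stats import fisher_exact

table = [[7, 3],   # saline: died, survived
         [3, 7]]   # ILE:    died, survived
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```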
A sample size of 70 participants was required to detect 40% differences in proportion, with power of 0.9 and type I error of 0.05. Results: Ninety-five surveys were distributed, with a response rate of 86%. The two reviewers had 97% agreement after initial assessment. For each theme, inter-rater reliability was calculated, with kappa ranging from 0.76 to 1.00. For three themes, there were statistically significant improvements in recall in the narrative arm (see table). For one theme, recall was greater in the summary arm. Conclusion: Physicians exposed to a narrative were more likely to recall guideline recommendations than those exposed to a standard summary. Dissemination strategies that incorporate narratives may improve the adoption of clinical evidence. Methods: This prospective cohort study was conducted at two urban university tertiary care hospitals. Patients >18 years with acute drug overdose were enrolled from the ED over 3 years. Excluded were patients with alternate diagnoses, anaphylaxis, chronic drug toxicity, and missing outcome data. ED clinical data included demographics, exposure intent, ECG intervals, vital signs, laboratory chemistries, altered mental status (GCS <15 or coma/agitation/delirium), and prior cardiac disease (coronary disease or congestive heart failure). In-hospital ACVE was defined as any of: 1) myocardial necrosis (elevated troponin), 2) shock (hypotension requiring vasopressors), 3) ventricular tachycardia or fibrillation (VTVF), or 4) cardiac arrest (no pulses or chest compressions). Analysis included univariate factor analysis and multivariable logistic regression with test characteristics of the derived model. Objectives: To determine the effect of a hypothesis listing and justification rule upon CR performance in post end-of-second-year medical students. Methods: This study qualified for IRB exemption and was conducted at the learning resource center of a major traditional medical school. Using an experimental, pre-test/post-test, control group design, ninety-seven of 191 post end-of-second-year medical students were randomly assigned to justification (JUST) or no justification (NOJUST) treatments. The JUST treatment required students to periodically list their most likely diagnostic hypotheses, and to justify their listing with supporting data. Students worked through six computer simulations: one pre-test, three practice simulations with the JUST or NOJUST treatment, and two post-tests. Multivariate and repeated-measures ANOVA were used to compare treatment groups along nineteen previously validated CR dependent measures. Results: Complete data were obtained for 81 medical students (JUST n=39; NOJUST n=42) who did not differ from the remainder of the medical school class. No pre-test differences were noted between JUST and NOJUST groups. No significant CR pre-test/post-test gains were noted between JUST and NOJUST groups, except that JUST students acquired greater physical examination proficiency (p = .020). JUST students perceived that the hypothesis listing and justification rule facilitated their problem-solving success, forced them to consider supporting data, made solving simulations easier, and helped them to organize their data collection. Conclusion: Using a hypothesis listing and justification rule throughout the problem-solving process improves physical examination proficiency more than any other data-gathering or clinical decision-making measure in post end-of-second-year medical students.
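Several abstracts in this section summarize two-rater agreement with kappa (e.g., 0.76-1.00 for the theme coding above); a minimal sketch of Cohen's kappa for binary present/absent codes, using made-up ratings:

```python
# Sketch: Cohen's kappa for two reviewers coding a theme as present (1)
# or absent (0). The ratings below are illustrative, not study data.
import numpy as np

def cohens_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_observed = np.mean(r1 == r2)
    # Chance agreement from each rater's marginal frequencies
    categories = np.unique(np.concatenate([r1, r2]))
    p_expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
```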
Background: There has been significant growth in the use of human patient simulation (HPS) in emergency medicine (EM) residency training. While HPS is well developed as an educational method, less is known about its efficacy as an assessment tool. Objectives: The goal of this project was to assess the inter-observer reliability of a checklist tool compared to a Dreyfus five-level assessment of performance, using both direct faculty observation and delayed video observation of simulated cases in sepsis and cardiogenic shock resuscitation. Methods: Residents of the Emergency Medicine Residency at BWH/MGH completed the cases with two faculty observers present to assess the residents using a checklist as well as a gestalt assessment on the Dreyfus five-level scale (a score of >= 3 was considered passing). Direct feedback was given to the resident at the completion of each case. The performances were also reviewed on video by a third faculty member and given the gestalt rating only. Results: Ninety-two percent of residents passed the sepsis case, and 75% of residents passed the cardiogenic shock case as rated by the observing faculty. Inter-observer agreement was demonstrated with many of the checklist items. Residents who were rated as passing the sepsis case missed an average of 3 checklist items, while those who did not pass missed an average of 5 items; for the cardiogenic shock case the values were 2.5 and 3.6, respectively. Using a passing score of >= 3, there was 92% inter-observer agreement (kappa not calculable) on passing the septic shock case and 83% (kappa = 0.62) on passing the cardiogenic shock case in the direct observations. However, using the exact value of the five-level score demonstrated only 18% and 58% agreement, respectively. Delayed video review demonstrated 91% agreement (kappa = 0.62) with direct observation for passing the sepsis case and 75% (kappa = 0.5) for passing the cardiogenic shock case. Residents rated the experience as both fair and helpful. Resident evaluation of the session included frequent mention of the value of one-on-one feedback with the attending physician after each case. Conclusion: While there was good inter-rater agreement in global assessment of passing the cases for both direct observation and delayed video review, there was poor correlation in the specific score given. Background: The ACGME requires residency programs to assess residents in six competencies. For interpersonal and communication skills (IPS) and professionalism (PROF), patient surveys are identified as the "most desirable" method. The logistics of administering patient surveys are challenging for residency programs, so patient feedback is often excluded from the assessment process. We have found no published studies using patient questionnaires to specifically evaluate emergency medicine residents on these competencies. Objectives: Our aim was to develop and validate a patient survey to assess residents on select ACGME competencies and answer the following questions: (1) Can a short six-question survey be used to detect differences in IPS and PROF skills between resident levels and between the sexes? (2) Do the patient assessment scores of residents correlate with faculty scores? Methods: This was a prospective observational study conducted in the ED at the University of California, San Francisco, a metropolitan tertiary referral center. We developed a brief survey and administered it at the time of discharge using an electronic touch-screen tablet. Participants were adult patients who were treated and discharged from the ED.
We excluded patients with primary psychiatric diagnoses or who were being admitted to the hospital. We created a mean composite score of the patient surveys and faculty evaluations. Using a one-way analysis of variance (ANOVA), we determined if there were any differences on either instrument by resident level or sex. We used linear regression to determine if the patient survey ratings predicted faculty ratings. Results: We collected 123 patient surveys. The reliability of the patient survey was 0.80 and the reliability of the evaluation of the residents by faculty was 0.90. One-way ANOVA revealed no difference in scores by resident level or sex on either instrument. The scores on the patient survey did not predict residents' evaluation scores on IPS and PROF by faculty (B=0.2, p=0.3). Conclusion: This survey detected no differences in IPS and PROF skills between resident levels or sexes, and patient scores were not predictive of faculty scores. Can We Rely on EM Resident Self-assessment of EM Knowledge? Background: The ability of residents to make informed self-assessments of their medical knowledge is an important component of the newly initiated milestones. Kruger and Dunning (1999) demonstrated that low performers overestimate their skill while high performers underestimate their skill. It is unknown whether residents accurately utilize data from the in-training exam in their self-assessments. Objectives: We hypothesize that EM residents will conform to the model described by Kruger and Dunning. We asked: Do resident self-assessments correlate with their performance on the in-training exam? Is this self-assessment consistent with subsequent performance? Methods: We analyzed data from a multi-institutional prospective cohort research study conducted on a convenience sample of 54 residents from four ACGME-accredited EM residencies. Residents completed a self-assessment of their knowledge in EM core topics. Subsequently, they participated in sessions using 10 topic-specific Rosh Review questions and an audience response system (ARS). We compared residents' self-assessment of medical knowledge with prior in-training exam scores and their ARS scores. Five topic areas were analyzed. Self-assessments were averaged across topics and codified to indicate whether the residents felt their knowledge was above or below average. In-training exam scores and ARS scores were codified as above or below the mean by PGY level. The residents were split into four categories: low performer, accurate assessment; low performer, inaccurate assessment; high performer, accurate assessment; high performer, inaccurate assessment. We used descriptive statistics. Comparisons were made using Fisher's exact test. Results: We found that high performers were more likely to accurately self-assess compared to low performers. Conclusion: As expected, "low performers" were inaccurate with their delayed self-assessment and their performance on the ARS sessions. "High performers" were more accurate. Objectives: To determine the effect of a low-resource-demand, easily disseminated computer-based teamwork training intervention on teamwork behaviors and patient care performance in code teams. Methods: Design: A randomized comparison trial of computer-based teamwork training versus placebo training was conducted from August 2010 through March 2011. Setting and Subjects: Subjects (N=231) were fourth-year medical students and first-, second-, and third-year emergency medicine residents at Wayne State University.
Each participant was assigned to a team of 4-6 members (N = 45 teams). Interventions: Teams were randomly assigned to receive either a 25-minute, evidence-based, computer-based training module targeting appropriate resuscitation teamwork behaviors, or a placebo training module. Measurements: Teamwork behaviors and patient care behaviors were video recorded during high-fidelity simulated resuscitations and coded by trained raters blinded to condition assignment and study hypotheses. Teamwork behavior items (e.g., CXR findings communicated to team) were standardized before combining to create overall teamwork scores. Similarly, patient care items (e.g., CXR correctly interpreted) were standardized before combining to create overall patient care scores. Subject matter expert reviews and pilot testing of scenario content, teamwork items, and patient care items provided evidence of content validity. Results: When controlling for team members' medically relevant experience, teams in the training condition evidenced better teamwork (F(1,42) = 4.81, p < .05; partial eta-squared = 10%) and patient care (F(1,42) = 4.66, p < .05; partial eta-squared = 10%) than did teams in the placebo condition. Methods: Patients were enrolled at an academic ED from 7/2009 to 2/2012. English-speaking patients who were 65 years or older and in the ED for less than 12 hours were included. Patients who were deaf, blind, comatose, or severely demented were excluded. The RB-CAM (Figure) is a modification of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU), which is a brief (<2 min) delirium assessment that can be reliably used by non-physicians but may have limited sensitivity in non-critically ill patients. The CAM-ICU and RB-CAM primarily differ in how inattention is assessed. In addition to the Vigilance A ("Squeeze my hand when you hear the letter 'A'") used in the CAM-ICU, the RB-CAM asks the patient to recite the months backwards from December to July. The RB-CAM was performed by a research assistant (RA) and an emergency physician. RAs were college graduates, emergency medical technicians, or paramedics. The reference standard for delirium was a comprehensive psychiatrist assessment using DSM-IV criteria. All assessors were blinded to each other, and their assessments were conducted within 3 hours. Sensitivities and specificities and their 95% confidence intervals (95% CI) were calculated for the RA and emergency physician. Kappa coefficients between the RA and emergency physician were calculated for reliability. Results: Of the 406 patients enrolled, 50 (12%) were delirious. The median age (IQR) was 74 (69, 80) years, 202 (50%) were female, and 57 (14%) were of non-white race. The RB-CAM's diagnostic performance for the RA and emergency physician is shown in the Table. The RB-CAM had very good sensitivity and excellent specificity for delirium with both raters. Interobserver reliability between the RA and emergency physician was very good (kappa = 0.87). Conclusion: Non-physicians can perform the RB-CAM reliably and with very good diagnostic accuracy. This may be a useful method to rapidly assess for delirium in ED research studies. Background: Delirium is common among elderly emergency department (ED) patients and associated with high morbidity and mortality, yet is diagnosed by ED physicians only 17% of the time. Accordingly, improved detection methods for delirium are needed. Objectives: To determine whether biomarkers of endothelial dysfunction and inflammation are associated with delirium in the ED setting.
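The teamwork abstract above standardizes individual behavior items before combining them into overall team scores; a minimal sketch of that z-scoring step, with illustrative values:

```python
# Sketch: z-standardize coded behavior items before averaging into an
# overall team score, as in the teamwork abstract above. Values are
# illustrative, not study data.
import numpy as np

# Rows = teams, columns = coded behavior items
items = np.array([
    [1, 0, 3, 2],
    [1, 1, 4, 3],
    [0, 0, 2, 1],
    [1, 1, 5, 2],
], dtype=float)

z = (items - items.mean(axis=0)) / items.std(axis=0)
overall = z.mean(axis=1)  # one overall score per team
print(overall.round(2))
```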
Methods: We performed a prospective, observational study of elderly patients in our 55,000-visit urban university ED. Inclusion criteria: ED patients >= 65 years, informed consent, and discarded blood sample obtained. A trained research assistant performed a structured mental status assessment, after which delirium was determined using the Confusion Assessment Method. Using a random-number generator, we selected a total of 114 specimens for analysis, with a 2:1 ratio of non-delirious to delirious subjects. Inflammatory markers (including interleukins [ILs], macrophage inflammatory proteins [MIPs], and tumor necrosis factor-alpha [TNF-α]) and markers of endothelial function (intercellular adhesion molecules [ICAMs], vascular adhesion molecules [VCAMs], and the VEGF signaling cascade) were a priori selected for analysis and measured using a combination of ELISA and Luminex multiplex platform assays. Medians and 25%-75% interquartile ranges (IQRs) were calculated for delirious and non-delirious subjects and compared with a Wilcoxon rank sum test. Subsequently, a step-down Bonferroni correction was applied to adjust for multiple testing, and logistic regression was performed to adjust for age, sex, severity of illness, and comorbid burden. Results: 353 patients were enrolled in the study and had discarded blood samples available for analysis. Of the 114 specimens selected for analysis, 37 (32%) were from delirious subjects. Twenty biomarkers were assayed, of which five were significantly associated with delirium (IL-8, IL-10, MIP-1α, TNF-α, sVCAM-1; see table). Background: Senior (geriatric) emergency departments (SEDs) are opening around the US in response to a growing senior demographic. This study evaluates the change in satisfaction following introduction of a new SED as measured by "likelihood to recommend" on the Press Ganey (PG) survey. The setting is a 45,000 annual visit community ED which established an SED in 2010, incorporating special physical features, screening for common senior challenges such as depression, dementia, and drug interactions, and an online geriatric emergency training program for all ED physicians and nurses. Small changes on the PG survey can result in large changes in percentile, moving a department from "Yellow" (second decile) to "Green" (top decile). Objectives: The objective of the study is to determine if a new dedicated SED for patients > 65 years of age, with specialized screening and social work intervention for all ED patients in this age group, will significantly improve patient satisfaction as measured by "likelihood to recommend" on PG satisfaction surveys. Methods: The patient's PG rating of "likely to recommend" was used. Included patients were age >= 65; exclusions were triage level 1, inability to complete a survey, and prisoners. "Likelihood of recommending" was used to estimate improvement in scores, using a test for two proportions, comparing a group of patients aged >= 65 years surveyed before and after the implementation of the new SED. This included patients not seen in the new, specialized senior area but who did receive senior comorbidity screening. Results: Females constituted 62% of the patients. There were 118 patients in the pre-intervention survey group and 212 in the post-intervention group. The "likelihood of recommending" the ER increased from 89.9 (second decile) in the pre-SED group to 92.2 (top decile) in the SED group.
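A sketch of the screening analysis described in the delirium biomarker abstract above: rank-sum tests across markers followed by a step-down Bonferroni (Holm) correction. The data below are simulated, not study values.

```python
# Sketch: Wilcoxon rank-sum screening of 20 biomarkers with a step-down
# Bonferroni (Holm) correction, as in the biomarker abstract above.
# All data are simulated for illustration.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
delirious = rng.lognormal(mean=1.2, sigma=0.5, size=(37, 20))
controls = rng.lognormal(mean=1.0, sigma=0.5, size=(77, 20))

p_values = [mannwhitneyu(delirious[:, j], controls[:, j]).pvalue
            for j in range(20)]
rejected, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(f"{rejected.sum()} of 20 markers significant after Holm correction")
```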
This was found to be significantly different based on a test for two proportions, with p < 0.001 (Minitab 16® Statistical Software). Conclusion: The implementation of a new senior emergency department, with senior screening and enhanced social work, resulted in a small but significant increase in the patient's likelihood of recommending the ED. Using this measure, the ED was moved into the top decile. Conclusion: We found that pRBC transfusion was associated with lower hemoglobin level and mean arterial pressure. However, the mortality rate was not found to be significantly different among patients with pRBC transfusion. The effectiveness of pRBC transfusion was not found to differ across hemoglobin levels. The Prognostic Value of Brain Natriuretic Peptide in Combination with the Sequential Organ Failure Assessment Score in Septic Shock. Won Young Kim, Tyler Giberson, and Michael Donnino; Beth Israel Deaconess Medical Center, Boston, MA. Background: The mortality from septic shock remains high despite advancements in supportive care, and much effort has been devoted to identifying factors for determining severity and outcome. The sequential organ failure assessment (SOFA) score, a reliable marker of sepsis severity, has been shown to predict sepsis outcomes in different models. However, this prognostic performance has been inconsistent, especially in emergency department (ED) populations. The SOFA score uses only the requirement for inotropic agents as its measure of cardiac dysfunction, which may not accurately prognosticate in the sepsis cohort. Thus, an additional marker of cardiac dysfunction may be needed. Recently, brain natriuretic peptide (BNP) has been described as being elevated in patients with septic shock, at levels comparable to those found in heart failure patients. Objectives: To evaluate the prognostic value of BNP in combination with the SOFA score in patients with septic shock at the time of ED evaluation. We hypothesize that the addition of BNP to the SOFA score will improve its predictive ability. Methods: Study subjects included ED patients with septic shock who had BNP measured at the time of diagnosis. All patients were treated with the algorithm of early goal-directed therapy between January 2010 and December 2012. SOFA scores were calculated at ED recognition. The primary outcome was 28-day mortality. The area under the receiver operating characteristic (ROC) curve was used to compare the predictive ability of the SOFA score alone and in combination with BNP. Objectives: We hypothesized that 1) women have lower bundle completion rates and 2) completion of specific bundle elements differs by sex. Methods: This was a retrospective, observational study in an urban academic ED and national SSC Database study site. Consecutive patients (age >18) admitted to intensive care with severe sepsis or septic shock and entered into the SSC database from 10/05 to 2/12 were included. Completion of overall and individual bundle elements was exported from the SSC database. Two trained research assistants, blinded to the primary outcome, used a standard abstraction form to obtain patient data, including SOFA scores and comorbidities. Interrater reliability was assessed on a random sample of charts. Univariate analyses were performed. Conclusion: There were no sex-specific disparities in bundle completion rates or in-hospital mortality rates. Women were less likely to receive antibiotics within 3 hours, a key element of the overall SSC bundle.
Further research is needed to examine how illness severity or other patient-specific factors may differentially affect completion of the overall bundle or individual elements in women and men. The Impact of Crowding Upon Implementation of Early Goal-Directed Therapy in the Emergency Department. David F. Gaieski1, Anish K. Agarwal1, Jesse Pines2, Munish Goyal3, and Frances Shofer1; 1The University of Pennsylvania, Philadelphia, PA; 2George Washington University, Washington, DC; 3Georgetown University, Washington, DC. Background: Optimal management of severe sepsis patients includes early identification, aggressive resuscitation, including Early Goal-Directed Therapy (EGDT) in eligible patients, and timely administration of intravenous fluids (IVF) and antibiotics. Critically ill patients require significant time commitments and care coordination in emergency departments (EDs), which are treating expanding populations while lacking a sufficient provider workforce. Objectives: We hypothesized that increased ED crowding would decrease utilization of EGDT, delay time to IVF and antibiotics, and increase mortality for EGDT-eligible patients. Methods: Retrospective chart review of EGDT-eligible (lactate > 4 mmol/L or persistent hypotension) severe sepsis patients (>= 2 SIRS criteria; a confirmed or suspected source of infection; presence of at least one acute organ dysfunction), admitted to an urban, Level I trauma center from the ED, 5/2008-2/2010. Four validated measures of ED crowding (ED occupancy, waiting patients, admitted patients, and patient-hours) were assigned to each patient at the time of triage, and the associations between them and time to antibiotics and fluids or receiving EGDT were tested by analyzing trends across crowding quartiles, using analysis of variance on the ranks or Cochran-Armitage trend tests, respectively. Results: 1,095 EGDT-eligible severe sepsis patients were identified; 675 were treated with EGDT. Mean age was 58.9 years; 43% were Caucasian and 53% African-American; in-hospital mortality was 26%. A significant decrease in EGDT implementation occurred as ED inpatient boarding increased; time to IVF increased as boarding ED patients increased, and time to antibiotics trended similarly as boarding ED patient-hours increased (table). Mortality was not affected by crowding parameters. Conclusion: Boarding admitted inpatients within the ED decreases the initiation of EGDT in EGDT-eligible severe sepsis patients. Times to critical interventions (IVF, antibiotics) also increased significantly as ED patient-hours and inpatient boarding increased. These differences may represent changes in ED staffing, triage methods, or location of EGDT initiation. As crowding increases, EDs must create systems that optimize delivery of time-sensitive therapies to critically ill patients. Methods: Five domestic swine weighing 50-60 kg were anesthetized, intubated, and ventilated. Arterial and venous lines were placed via femoral cutdown, a Swan-Ganz catheter was placed into the pulmonary artery, and placements were confirmed by waveform. The abdomen was opened via midline incision, the cecum was identified, 2-3 vessels were ligated to create an ischemic insult, and the cecum was perforated with a 1 cm incision using electrocautery. The peritoneum was inoculated with 1 g/kg of fresh feces. Swine were observed until the onset of hypotension (MAP < 60 mmHg) and were then given a fluid bolus and pressor agents to keep MAP 50-60 mmHg.
Animals were observed until SvO2 reached 50-60%, at which time they were resuscitated using a modified Rivers protocol. Vital signs were recorded every 10 minutes. Animals were euthanized at the end of the experiment. Conclusion: NH residence, age >65, immunosuppression, and neutropenia are not significant risk factors for developing resistant organism infections in cases of severe sepsis admitted from the ED. ESRD is a significant risk factor (4-fold increase) for resistant organism infection. Antimicrobial therapy for ESRD patients admitted from the ED with severe sepsis should include consideration for addressing resistant organisms. Background: The incidence of severe sepsis is increasing; it has an inpatient mortality rate approaching 30% and costs the health care system over $24 billion annually. There is a well-established relationship between higher case volume and improved outcomes in similar time-sensitive emergency conditions such as cardiac arrest, STEMI, and ischemic stroke. Objectives: We sought to determine the relationship between hospital factors (including case volume, urban location, and number and type of organ dysfunction) and mortality from severe sepsis. We hypothesized that hospitals with higher case volume have lower adjusted in-hospital mortality. Methods: We performed a retrospective analysis of nationally representative data using the Nationwide Inpatient Sample. Results: We identified a total of 854,346 (weighted total of 4,201,489) cases of severe sepsis over a 6-year period (2004-2009). Methods: This was a prospective, observational study conducted at a tertiary referral center. Participants were physicians involved in the resuscitation of OHCA patients who were enrolled in our institutional post-cardiac arrest clinical pathway that included TH. Immediately following patient resuscitation in the ED, physicians recorded their prediction regarding patient survival and neurologic outcome on a standardized questionnaire. Neurologic outcome was assessed by the cerebral performance category (CPC). Good neurologic outcome was defined as CPC 1 or 2. Patient outcomes were retrieved from our institutional cardiac arrest quality assurance registry. Objectives: We hypothesized that centers engaged in aggressive post-cardiac arrest care centered on therapeutic hypothermia will utilize invasive hemodynamic monitoring to ascertain adequacy of resuscitative endpoints, including central venous pressure (CVP) and central venous oxygen saturation (ScvO2). Methods: This study is a collaboration among four centers with aggressive PCAS and TH protocols (BID, Penn, Pitt, VCU). It is a secondary analysis of prospectively collected data on out-of-hospital cardiac arrest patients who underwent TH. The primary objective of this study was to determine the use of invasive hemodynamic monitoring in the care of PCAS patients undergoing TH. The secondary objective was to determine if either CVP or ScvO2 was associated with better neurologic outcome, measured as cerebral performance category (CPC) dichotomized into "good" (1 or 2) or "poor" (3, 4, or 5). Objectives: We hypothesized that the use of neuromuscular blockade is associated with improved outcomes after out-of-hospital cardiac arrest and improved oxygenation. Methods: We performed a post-hoc analysis of a prospective multicenter observational study of adult cardiac arrest from 6/2011 to 3/2012. Inclusion criteria were: adult (> 18 years) comatose survivors of out-of-hospital cardiac arrest.
The primary exposure of interest was neuromuscular blockade for 24 hours following return of spontaneous circulation, and the primary outcomes were in-hospital survival and neurologically intact survival. Secondary outcomes were evolution of oxygenation (PaO2:FiO2) and lactate clearance. We tested the primary outcomes of in-hospital survival and neurologically intact survival with multivariable logistic regression. Secondary outcomes were tested with multivariable linear mixed models. Conclusion: In this population of out-of-hospital cardiac arrest, we found that early neuromuscular blockade sustained for a 24-hour period is associated with an increased probability of survival. Secondarily, we found that early, sustained neuromuscular blockade is associated with improved lactate clearance. Objectives: We hypothesized that patients who suffer worse neurologic outcomes will have more significant hemodynamic derangements in the early hours post-arrest than those with good neurologic outcome. Methods: This is a collaborative study among four centers with aggressive TH protocols (BID, Penn, Pitt, VCU). It is a secondary analysis of prospectively collected data on out-of-hospital cardiac arrest patients who underwent TH. The primary outcome is neurologic status at hospital discharge, measured as cerebral performance category (CPC) dichotomized into "good" (1 or 2) or "poor" (3, 4, or 5). Methods: A systematic review of the literature was performed by searching the following databases from their inception without language restrictions: MEDLINE, PubMed, ProQuest, Cochrane, CINAHL, EM Abstracts, and EMBASE. Content experts were contacted and bibliographies of relevant studies were reviewed to identify additional references. Quality assessment of included studies was independently performed by two investigators using the Cochrane Collaboration's tool for assessing risk of bias. Two authors also independently extracted data from the included studies using standardized data collection forms. Discrepancies were resolved by consensus or adjudication by a third reviewer. The primary outcome was overall survival. The secondary outcome was favorable neurologic outcome. A priori subgroup analyses were defined by initial cardiac rhythm: 1) ventricular fibrillation, and 2) pulseless electrical activity/asystole. Heterogeneity was assessed (chi-square and I² statistics) and results were pooled, if appropriate, using a fixed effects model. Objectives: We hypothesized that prehospital activation of a cardiac arrest response team would augment emergency department resuscitation resources and thus improve TH utilization and outcomes for out-of-hospital cardiac arrest patients. Methods: An emergency department cardiac arrest response team (eCART), comprising a cardiology fellow, pharmacist, respiratory therapist, hospital chaplain, and a therapeutic hypothermia expert ("Dr. COOL"), was created at an urban academic hospital. The eCART team was immediately activated by emergency department staff after receiving EMS radio notification of an arrest transport. We compared outcomes in consecutive adult OHCA patients during the 8-month period before (PRE) and the 12-month period after (POST) initiation of the eCART activation system. Objectives: To determine the rates of neurologically intact survival in post-arrest patients with prolonged downtime who were treated with comprehensive post-arrest care. We hypothesized that prolonged downtime is not universally fatal and good neurological outcome is possible.
Background: The effect of ED documentation scribes remains an area that has been inadequately studied. Practice revenue relies upon recovery of professional and facility charges from accurate coding of medical records. Prior studies have consistently reported the effect of these services on professional charges, with no studies assessing the effect on facility charges. We evaluated the effect of EM scribes on a variety of administrative metrics, including facility charges. Methods: This study was a prospective cohort of emergency department (ED) patients seen by attending physicians during a 6-month study period. At the initiation of the study, scribe services were introduced to this urban EM practice to cover approximately 20% of ED physician shifts. The ED scribe accompanied the physician for the entirety of the ED shift, entering patient encounters into the electronic medical record. The study design was a comparative analysis of physician metrics between clinical shifts with and without scribes. Excluded encounters included fast-track and advanced care practitioner patients. Data collected included the number of patient encounters with and without scribes for each physician, professional charges, facility charges, and professional relative value units (RVUs) coded. Data cohorts were compared using the Wilcoxon rank sum test, with descriptive statistics to summarize data. Results: During the study period, 35,301 patients were seen by physicians, with 7,420 (21%) seen with an ED scribe. Costs of the scribe program during the study were approximated at $81,000. Professional RVUs and visit charges increased during scribe encounters, with a mean increase of 17% for both metrics during the period ($52 per visit). Facility charges increased by 13% ($137 per visit) for scribe encounters. Total increases in charges attributed to more accurate coding from scribed physician encounters were $388,364 and $1,021,511 for professional and facility fees, respectively. Conclusion: Physician scribes had a substantial effect on practice revenues, both for professional and facility charges. The return on investment for an EM scribe service relies upon a number of factors, including recovery rates for charges and negotiated contractual services. Practices should evaluate the effect on facility revenue as part of any EM scribe program analysis. Decreasing ED Overcrowding Via Implementation of a Hospital-Wide Surge Plan. Shira Schlesinger, William Mallon, Rolando Valenzuela, and Christopher Celentano; LAC+USC Medical Center, Los Angeles, CA. Background: As part of our nation's emergency response plans, hospitals and EDs are mandated to create plans for responding to disasters that enable increased service provision in relatively austere environments. These disaster response plans may temporarily shift staff roles and responsibilities and repurpose equipment until the period of supply-demand mismatch has passed. Daily and seasonal fluctuations in ED census can themselves be seen as smaller disaster overflow scenarios, requiring similar protocols to enable efficient care of an increased patient load within the constraints of current staffing and budgets. Objectives: To determine the effect of implementing a hospital-wide disaster surge protocol on daily ED crowding. Methods: This was a prospective study of crowding at a large, academic, urban county hospital with 160,000 annual ED visits. Crowding was assessed with the National Emergency Department Overcrowding Study (NEDOCS) calculator.
NEDOCS scores were recorded for 4-month periods before and after implementing a hospital-wide surge plan. The primary outcome was the proportion of time spent in the NEDOCS category "Dangerously Overcrowded" before and after implementation. Secondary outcomes included the proportion of time operating in NEDOCS "Severely Overcrowded", average LOS, and the number of patients who left without being seen (LWBS). Objectives: We sought to assess the effects on length of stay (LOS), cost, quality, and safety for all ED patients undergoing CT imaging for abdominal pain of changing from a historical imaging methodology of IV and oral contrast to one of IV contrast alone. Methods: This prospective cohort study was conducted over a three-month period, from May 2011 to August 2011, at a large community ED (annual volume of 100,000). All consecutive patients presenting to the ED with abdominal pain necessitating CT scanning were administered IV contrast alone, without oral contrast. Using control charts and t-testing, effects on turn-around time, patient LOS, and anti-emetic utilization were measured. Total cost savings were estimated. Also assessed was the need for subsequent imaging with the administration of oral contrast. Results: 1782 patients were included. Turn-around time (time from order to completion of study) prior to the pilot averaged 144 minutes. After initiation of the new protocol, turn-around time decreased to 90 minutes (p < 0.01). LOS for discharged patients decreased by one hour (p < 0.01), while that for admitted patients decreased by an average of 46 minutes (p < 0.05). Anti-emetic utilization decreased as well, although this did not reach significance (p = 0.06). Estimated annualized cost savings from the change in protocol were over $500,000. Ten patients (0.56%) had repeat CT scans with oral contrast, with only one patient requiring a modification in management. No adverse outcomes were identified from not using oral contrast. Conclusion: Abdomino-pelvic CT imaging with IV contrast alone for all patients presenting to the ED with abdominal pain offers a safe and efficient alternative to CT with oral contrast, yielding not only improved ED throughput but significant cost savings as well. Methods: This single-center, retrospective, before/after analysis was conducted at a Level I academic trauma center that introduced a sepsis CDSS on 7/13/2011 to increase lactate testing in adults with sepsis. Study time periods (pre-alert: 11/1/10-6/30/11; post-alert: 11/1/11-6/30/12) were selected a priori as the longest consecutive, seasonally similar dates without significant changes to ED staffing or infrastructure. Study inclusion criteria were: age >= 18 years; no ED visit within the 12 months prior to the study periods; sepsis alert criteria (>= 2 SIRS criteria documented within 120 minutes); and an infectious ED diagnosis OR a symptom diagnosis plus antibiotic administration in the ED. ED vital signs (within 6 hours of triage) were queried with an automated algorithm for all ED visits meeting inclusion criteria. Objectives: To compare the number of ED visits and hospitalizations among discharged ED patients with a primary diagnosis of AF who followed up with an AF clinic and those who did not. Methods: A retrospective cohort study and medical records review including three major tertiary centres in Calgary, Canada. A sample of 600 patients was taken, representing 200 patients referred to the AF clinic from the Calgary Zone EDs compared to 400 matched control ED patients who were referred to other providers for follow-up.
The controls were matched for age and sex. Inclusion criteria included patients over 18 years of age, discharged during the index visit, and seen by the AF clinic between January 1, 2009 and October 25, 2010. Exclusion criteria included non-residents and patients hospitalized during the index visit. The number of cardiovascular-related ED visits and hospitalizations was measured. All data were categorical and were compared using chi-square tests. Results: In the six months following the index ED visit, the odds of an emergency visit for those who attended an AF clinic were similar to the odds for matched controls who did not attend the clinic (OR 0.83, 95% CI 0.57-1.22). The odds of a hospital admission for those who attended an AF clinic were similar to the odds for matched controls who did not attend the clinic (OR 0.56, 95% CI 0.29-1.09), but when adjusted for site location and type of AF, the estimated odds ratio was statistically significant (0.45, 95% CI 0.21-0.94, p=0.035). Conclusion: Based on our results, referral from the ED to an AF clinic for patients with first-onset or symptomatic AF was not associated with a significant reduction in subsequent CV-related ED visits compared to patients seen by usual care; however, AF clinic referral was associated with reduced subsequent CV-related hospitalizations when adjusted for site and type of AF. Background: Atrial fibrillation (AF) is often newly diagnosed in the ED. Not all patients with AF will progress to sustained AF (i.e., episodes lasting >7 days), which is associated with increased morbidity. The HATCH score stratifies patients with newly diagnosed or paroxysmal AF according to predicted risk of progression to sustained AF within 1 year. The HATCH score has never been tested in ED patients. Objectives: We hypothesized that the HATCH score may identify ED patients with newly diagnosed AF who are likely to progress to sustained AF within 1 year. Our aim was to evaluate the HATCH score's predictive capability in ED patients with newly diagnosed AF. Methods: We conducted a retrospective, single-center cohort study from 8/1/05 to 7/31/08 of 253 ED patients with newly diagnosed AF for whom rhythm status was known at 1 year following their ED visit. Two investigators, blinded to HATCH scores, independently reviewed each medical record and determined if the patient progressed to sustained AF within 1 year. Disagreements were resolved by consensus between the two reviewers, with a third investigator adjudicating any conflicts. The exposure variable was the HATCH score at the initial ED visit. The HATCH score is an ordinal scale ranging from 0 to 7 points and is calculated as: 1 x (hypertension) + 1 x (age > 75 years) + 2 x (transient ischemic attack or stroke) + 1 x (chronic obstructive pulmonary disease) + 2 x (heart failure). The primary outcome was rhythm status at 1 year from initial AF diagnosis. We constructed a ROC curve and calculated the AUC to estimate the HATCH score's ability to predict progression to sustained AF. Results: Overall, 61/253 (24%) of patients progressed to sustained AF by 1 year. The HATCH score was only modestly predictive of progression to sustained AF, with an AUC of 0.62 (95% CI: 0.54 to 0.70). The figure reports the prevalence of HATCH scores 0 through 7 and the proportion of patients with each score who progressed to sustained AF. Of patients with a HATCH score of 0, 18.8% progressed to sustained AF.
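The HATCH score as defined above reduces to a five-term weighted sum; a minimal sketch:

```python
# Sketch of the HATCH score exactly as defined in the abstract above:
# 1 x hypertension + 1 x (age > 75) + 2 x (TIA/stroke) + 1 x COPD
# + 2 x heart failure, giving an ordinal 0-7 scale.
def hatch_score(hypertension, age_gt_75, tia_or_stroke, copd, heart_failure):
    """Return the HATCH score (0-7); higher predicts progression to sustained AF."""
    return (1 * hypertension
            + 1 * age_gt_75
            + 2 * tia_or_stroke
            + 1 * copd
            + 2 * heart_failure)

# Hypothetical 80-year-old with hypertension and heart failure: 1 + 1 + 2 = 4
print(hatch_score(True, True, False, False, True))
```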
Conclusion: Among ED patients with newly diagnosed AF, the HATCH score was only modestly predictive for progression to sustained AF. Because only two patients had a HATCH > 5, this recommended cutoff was not useful in identifying high-risk patients in our cohort. Refinement of this decision aid is needed to improve its prognostic accuracy in the ED population. Objectives: We describe the feasibility of simplifying the AFEQT for short-term QoL assessments via phone within a diverse population of AF/F patients receiving emergency department (ED) care across seven community hospitals. Methods: As part of a multicenter observational study of ED management and short-term outcomes of AF/F patients, we adapted the AFEQT for phone follow-up with patients one month after their ED visits for newly diagnosed or recent-onset ( 48 hours) AF/F. We kept the original 20-item AFEQT format, but condensed the seven-point Likert response scale to five for ease of interviewing. We added questions about health in weeks prior to the ED visit, effectiveness of ED treatment, and medication compliance. Patients were consented for participation by phone and excluded if: unable to discriminate between AF/F and other comorbidities; unable to recall diagnosis; too ill to talk; deceased; non-English speaking. Conclusion: These interim results suggest that our modified AFEQT is a practical and feasible research tool for QoL assessments within a diverse subpopulation of AF/F patients. Additional analyses will evaluate the association between QoL scores and patient and treatment factors. Future investigations utilizing this and other disease-specific tools may consider modifications, such as adaptation to phone interview, to better match the instrument to the study population and survey modality. Objectives: In this study we compared the change in NT-proBNP levels over 6 hours in patients with vaso-vagal and arrhythmic syncope to determine whether this change can predict arrhythmic syncope. Methods: Thirty-three patients were considered, including eighteen with arrhythmic syncope as they underwent controlled ventricular tachycardia (VT) or ventricular fibrillation (VF) during device safety testing of an ICD implant or battery replacement. These patients were compared to fifteen patients matched for age and co-morbidities who during a tilttest were diagnosed with vaso-vagal syncope. For each patient, a blood sample for NT-proBNP was collected at baseline and six hours after the episode of VT/VF or vasovagal syncope. We calculated the percentage increase in 6-hour NT-proBNP concentration between the two groups using non-parametric techniques. We also calculated the area under a receiver operating characteristic curve (AROC) with 95% confidence intervals and report the best cut-point to maximize sensitivity and specificity of the % increase in NT-proBNP to detect arrhythmic syncope. Results: The 6-hour change of NT-proBNP concentrations between patients who had episodes of VT/VF and patients with vasovagal syncope was significantly different with a median increase of 32% in VT/VF vs. 5% in the vasovagal syncope group (p<0.003). The area under the ROC curve to predict arrhythmic syncope was 0.8 (95% CI 0.65 -0.95). The best cut-point identified in this study to discriminate arrhythmic from vaso-vagal syncope was a 25% increase which had a sensitivity of 67% (95% CI 45-88), specificity 87% (95% CI 69-100), and positive likelihood ratio of 5. 
The results of this study suggest that a 6-hour NT-proBNP increase may be able to predict arrhythmic syncope. Future work is needed to confirm these findings in undifferentiated emergency department patients who present with syncope. Background: IV placement is the most common ED procedure and has been shown to cause an average of 3/10 pain. However, many patients in the ED undergo multiple needle-sticks (MNS) due to difficult venous access or repeat blood draws. It is unknown whether MNS results in higher pain scores over a single needle-stick (SNS). Objectives: To determine the association between number of needlesticks and patient's pain levels due to IV placement in the ED. We hypothesized that patients undergoing MNS will have higher pain scores. We performed a prospective observational study of patients presenting to an urban academic teaching hospital with an annual census of 65,000. Patients were included if they had an IV placed. Data were collected by trained research associates who interviewed patients immediately after successful IV placement during periods of block enrollment from July through October 2012. Patients were excluded if complete data was not obtained. The primary outcome was the highest level of pain experienced during IV placement on a validated 1-10 scale. Secondary variables included patient characteristics, whether the patient rated IV placement as the most painful experience during the ED stay, and largest catheter size using during IV placement. Variables were compared between SNS and MNS patients using chi-square analysis and Mann-Whitney U test. Results: 548 patients met inclusion criteria of whom 23 were excluded due to incomplete data leaving 525 for data analysis. Demographic data and patient characteristics are presented in the table. Overall the median pain level was 2 (IQR 2-3). Median pain level was higher in the MNS group (5, IQR 3-7.3) compared with the SNS group (2, IQR 1-4), p<0.001 (figure). 59.7% (95CI 51-69%) of MNS patients rated IV placement as the most painful experience in the ED compared to 45.6% (95CI 40.7-50.7%) of SNS patients, p=0.006. Catheter size was not significantly associated with pain scores (p=0.07). Conclusion: MNS is associated with increased patient pain. Methods to reduce unnecessary needle-sticks may improve patient satisfaction, however this requires further study. procedures utilize infiltrated local anesthetic (LA). Despite local infiltration, many patients still report significant pain and discomfort both during administration of LA , as well as during I&D itself. Topical anesthetic agents are another option that may offer similar efficacy. Objectives: The anesthetic efficacy and overall patient satisfaction of LA are compared to that of topical administration of lidocaineepinephrine-tetracaine (LET) gel. Methods: This study is a randomized, non-blinded, clinical trial in a convenience sample of 15 patients with cutaneous abscesses presenting to an academic emergency department between 2/2012 and 11/ 2012. This was designed as a non-inferiority study to evaluate for a difference in pain control. Patients consenting to enrollment were randomized to either local infiltration of 1% lidocaine without epinephrine or topical application of LET gel (4% lidocaine, 0. 05% epinephrine, and 0.5% tetracaine). Both pain associated with the procedure and patient satisfaction were measured on a 10-point Likert pain scale. Groups were analyzed based on intention to treat using the t-test. 
Based on a global alpha of 0.05 with a power = 90% and an effect size (ES) of d=0.63, we estimate that we need 54 patients per arm, or 108 total. Results: While still currently enrolling patients, a total of 15 patients were randomized to local infiltration (7) and topical administration (8) of anesthetic. Groups were similar in baseline characteristics. There was no statistically significant difference in pain scores reported by patients between the LA group compared to patients in the LET group with a mean (95%CI) of 5.6 (3.4-7.7) versus 6.6 (4.9-8.3) respectively. There was no statistically significant difference in patient satisfaction between the lidocaine vs. LET group [8.4 (6.9-9.9) versus 7.8 (6.0-9.7), respectively]. Conclusion: At the time of our interim analysis there is no statistically significant difference in overall patient satisfaction or difference in perceived pain when comparing local infiltration of lidocaine anesthetic to topical administration of anesthetic. However, at this time our study is underpowered to conclude that LET gel is noninferior to LA . At this time we believe no sweeping conclusions can be made about the optimal management of pain control for abscess I&D. Methods: Prospective blinded randomized controlled efficacy and safety trial of vapocoolant spray on pain in adults ( ! 21 years) undergoing venipuncture in the ED at a large urban tertiary care hospital. Adults were randomized to normal saline placebo spray or vapocoolant spray (Gebauer's Pain Easeâ, 1,1,1,3,3 pentafluoropropane and 1,1,1,2 tetrafluoroethane) prior to venipuncture. Numeric rating scales (NRS) (1 to 10) were obtained after the spray was given and following venipuncture. Assessment and photographs of the venipuncture site were done pre-and post-application of both sprays. Vital signs and side effects were documented. Results: There were no significant differences in demographics between the two groups. Normal saline ( Discussion: Vapocoolant is effective and safe for treatment of the acute pain of venipuncture in ED patients with a significant (p< 0.001) decrease of 3 in mean NRS compared with NS (4.72 saline to 1.76 vapocoolant) and was well tolerated. There were no visible abnormalities at the site post application of the spray. Following application of the spray and prior to venipuncture, there was no significant difference in mean NRS between the sprays with a mean NRS < 1 for either spray, indicating that appropriate application of the vapocoolant spray was not painful or uncomfortable. Conclusion: Vapocoolant is effective and safe for the treatment of the acute pain of venipuncture in ED patients with a significant decrease of 3 in the mean NRS compared with NS (4.72 saline to 1.76 vapocoolant) and was well tolerated with very few minor side effects that resolved quickly. There were no visible abnormalities at the site post application of the spray. Following application of the spray and prior to the venipuncture, there was no significant difference in mean NRS between the NS or Vapocoolant spray with a mean NRS < 1 for either spray, indicating that appropriate application of the vapocoolant spray was not painful or uncomfortable. Objectives: To compare the effectiveness of the VVV versus standard approaches: sight (S) and sight plus palpation (S+P) for identifying peripheral veins for IV placement in adults treated in an ED. 
Methods: Experienced emergency nurses and physicians identified peripheral venous access targets appropriate for IV cannulation of a crosssectional convenience sample of English-speaking adults aged 18-97 years presenting for treatment of sub-critical injury or illness who provided consent. The clinicians marked the veins with different colored washable markers and counted them on the dorsum of the hand, ventral forearm, and in the antecubital fossa using the three approaches: S, S+P, and VVV. A trained research assistant photographed each site for independent counting after each marking and recorded demographics and BMI. Counts were validated using independent photographic analyses. Data were entered into SAS 9.2 and analyzed using paired t-tests. Background: The fear of needles, needle phobia (NP), is recognized as a subset of "blood-injection-injury" phobias and is associated with pain, fear, and vasovagal syncope during needle insertion. The prevalence of NP in the general population is estimated at 10-25%, however the prevalence of NP in the ED has never been reported. Furthermore, the incidence of NP in the subset of patients who undergo multiple needle-sticks (MNS) due to difficult venous access (DVA) may be higher, but this has not been studied. Objectives: To determine the prevalence of NP in ED patients. We hypothesized that NP is prevalent in the ED and may be increased in patients with DVA. Objectives: (1) To determine the level of coliforms in fresh water sources near the flood plains of coastal Long Island (LI) and Queens, NY (Qu). (2)To determine the efficacy of simple, home techniques to sterilize coliform-contaminated water. Conclusion: Coastal fresh water sources on LI are contaminated with fecal coliforms. In the absence of clean watery delivery, and means to boil water, utilization of household bleach is an effective method to sterilize water. This simple sterilization reduces the potential for an outbreak of infectious diarrhea and a public health emergency after a large scale disaster. Background: The storm and flood that closed multiple hospitals in the lower half of Manhattan predictably had a significant effect on our ED, which was the only hospital in the area still operating. Since the combined census of the closed EDs exceeded our yearly census (greater than 100,000), the possibility of massive shortages of supplies, space, beds and services was real. Objectives: The purpose of this study is to describe the characteristics of an ED population during a medical emergency caused by the shutdown of multiple hospitals. Our hypothesis was that the numbers of ED patients would greatly increase but the characteristics of the patients would remain stable as the largest closed hospitals were within 10 blocks of our facility. Methods: For this ongoing study we compared data from the period 10/29/2012 (when Sandy arrived) to 11/23/2012 with that of a control period, 7/01/2012 to 10/29/2012 on factors that would be likely to reflect differences in patient acuity. Specifically, we looked at acuity level in triage, number of ambulance arrivals, admission rate for ambulance arrival, walkout rates for those arriving by ambulance, overall walk-out rates, "medication refill" chief complaint, and percent of geriatric patients. Results: Daily ED census for the two periods increased less than feared, 388 vs. 322 (20%) with a peak of 474 patients and a nadir of 247 (all comparisons are p<0.01, except as noted). 
Ambulance arrivals increased to 143 from 87 (64%) but the admission rate for those was lower (64% vs. 68%) and the ambulance walk-out rate was much higher (15.4% vs. 4.6%), likely indicating a lower acuity for a subgroup of the storm-affected cohort. The overall census increase of 20% was responsible for an increase in admitted patients of only 15% (from 66 to 76) although confounding factors such as prolonged through-put time in the ED may also have played a role. Similarly, "medication refills" more than doubled, patients over 65 years of age increased by 25% (16% to 20%), but there was virtually no change in the assignment of acuity scores at triage with 4.7 % of Sandy victims assigned Level 1 or 2 vs. 4.8% of the controls (p=NS). Conclusion: Despite an increase in ambulance volume of two-thirds during the period of multiple ED closures, patient acuity did not increase. Continued study of this event will indicate if this pattern persists even after more facilities reopen. hospital-based EMS system that responds to approximately 26,000 calls per year. Protocol: All ALS requests for two weeks before Hurricane Sandy and two weeks after the hurricane were reviewed. Patient demographics, total calls, and the dispatch category of each call were reviewed. The percent of total calls for each dispatch category were calculated and compared before and after the hurricane. Results: There were 806 calls in the two weeks before the storm and 898 after the storm. The average daily volume was significantly lower before the storm compared with after (difference is 6.5 patients per day, 95% CI: 0.2, 12.8). The most frequent dispatch categories before and after the storm were as follows (written as Category (prestorm % of calls, post-storm % of calls)): "Respiratory" (21%, 21%), "Cardiac" (18%, 17%), "Unconscious" (10%, 9%), "Medical NOS" (8%, 8%), "Stroke/CVA" (5%, 3%), Traffic Accident" (4%, 3%), "Syncope" (4%, 3%), "Diabetic" (4%, 4%), "Seizure" (4%, 6%), and "Altered Mental Status" (3%, 5%). With the exception of "Stroke/CVA" (difference 1.9%, CI: 0.1, 3.8), there were no significant differences in the proportion of any of the dispatch categories before and after the hurricane. Conclusion: Despite the increase in total ALS calls in the two weeks before and after Hurricane Sandy, there were no clinically significant differences in dispatch category. This information may guide prehospital providers in preparing for natural disasters. The Disease Frequency Among Evacuees After the Great Eastern Japan Earthquake and Tsunami Takahisa Kawano 1 , Kouji Morita 1 , Osamu Yamamura 1 , and Hiroko Watase 2 1 Fukui University Hospital, Fukui, Japan; 2 Japanese Emergency Medicine Research Alliance Investigator, Tokyo, WA Background: The Great Eastern Japan Earthquake and Tsunami occurred at 2:46 pm on March 11, 2011. As a result of the earthquake and tsunami, 15,845 people lost their lives and many people had to live in shelters. After the earthquake and tsunami, many evacuees in the shelters ware affected with a variety of medical problems. However, the report of disease frequency among evacuees after The Great Eastern Japan Earthquake and Tsunami was lacking. Objectives: This study was conducted to provide disease frequency of evacuees after the Great Japan Earthquake and Tsunami. Methods: This is a retrospective chart review study. 
The medical information was obtained from the charts of the evacuees who visited the clinics set up at four shelters in Ishinomaki city and two shelters in Watari town from March 21, 2011 to April 10, 2011. Medical records without a date and time, diagnosis, or patient information were excluded from the study. We report the disease frequency, patient demographic and total number of patients per 1,000 evacuees during the study period. Concern has been raised that use of ePCR may decrease availability of EMS resources due to increased turnaround time at receiving emergency departments in order to complete the ePCR compared to paper charts. Objectives: The purpose of this investigation is to determine if there is a significant change in turnaround time after implementation of an ePCR. Methods: This was a retrospective analysis of computer-aided dispatch data from pre-and post-implementation of ePCR in a busy urban tiered private EMS system with an annual total call volume of approximately 250,000. Only 9-1-1 emergency calls resulting in transport to a receiving facility were included in the analysis. Two matched three month time periods (June, July, and August) were selected in consecutive years surrounding a February 1st, 2011 system wide ePCR implementation. These time blocks were chosen to allow for a familiarization period, as well as to reduce seasonal effects which may be a confounder. A Student's t-test was used to analyze the data using SAS 9.3 (Gary, NC). Results: A total of 24,982 emergency transports were analyzed, with 12,540 in the pre-ePCR implementation group and 12,442 in the post-ePCR implementation group. Both groups had a similar right-skewed distribution with a long tail. The large sample size allows central limit to apply and parametric tests to be utilized. Background: The first point of contact for many emergency department (ED) patients is prehospital care providers who often obtain useful information from relatives, bystanders, or the environment. This information is communicated via patient care reports (PCRs) that are completed at the transition of care. However, our experience has been that paper PCRs are often lost or misplaced prior to being reviewed by ED clinicians. The advent of electronic PCRs (ePCRs) represents an opportunity for clinical data sharing and improved communication. Objectives: To characterize ePCR utilization by ED providers when integrated with an ED information system (EDIS). Methods: We created a novel interface between a regional EMS provider's ePCR system and our EDIS. Records were matched based on a combination of demographic fields such as name, birth date, sex, and social security number as well as time of arrival within a 2hour window. Once matched, these ePCR forms were available for viewing with one click from the patient's EDIS record. We retrospectively reviewed ePCR utilization over a five month period from 5/11/12 to 11/21/12. The study took place at an academic Level I trauma center with annual ED volume of 55,000. Demographic data of patients and users was collected. Data were analyzed using Microsoft Excel. Descriptive statistics with 95% confidence intervals were calculated using SAS. Results: A total of 4,197 ePCRs were submitted by EMS. Objectives: The primary purpose of this study was to compare the risk of intracranial injury in minor head trauma patients on clopidogrel to those not taking clopidogrel. Methods: A retrospective review of all patients (age ! 
15 years) who received head CTs in the ED for trauma during a 6-month span at an urban, academic Level I trauma center. Exclusion criteria include performance of the CT for medical etiologies, pre-injury use of warfarin, age < 16 years, and any traumatic injury not meeting criteria for minor head trauma (GCS of 15 on arrival with or without brief (< 1 minute) loss of consciousness. Patients who lost consciousness for unknown durations of time were included. Intracranial injury was defined as any blood in the intraparenchymal, subdural, subarachnoid, or epidural spaces. The records were reviewed by two emergency physicians following a brief instructional period. A kappa statistic was calculated. Medians (IQRs) were calculated for ordinal data. Step-wise logistic regression was used to assess for the presence of confounders. Results: 1560 head CTs were performed; 658 met inclusion criteria. Kappa statistics for each variable abstracted were considered "excellent" (> 0.8). The median (IQR) age was 37 (27-49) years. Males accounted for 482/658 (73%) of subjects. Ten subjects were taking clopidogrel prior to sustaining trauma, of whom three (30%) sustained intracranial injuries. In contrast, 648 were not taking clopidogrel, of whom 14 (2.2%) sustained intracranial injuries. After adjusting for age, sex, the presence of visible trauma above the clavicles, mechanism of injury, and pre-injury use of aspirin or clopidogrel, only use of clopidogrel remained statistically significant (OR 16.7; 95% CI 1.71-162.7). Conclusion: Pre-injury use of clopidogrel is a significant risk factor for the development of intracranial injury following minor head trauma. The small number of patients on clopidogrel in this preliminary study limited an accurate determination of the magnitude of this risk. We are now encountering patients on dabigatran, but there are few data regarding the effects of this new HMA outside of an industry-sponsored trial. A high-volume trauma center provides an opportunity to gauge the safety of dabigatran in a more clinically relevant context. This study will be the first to examine the mortality and severity of bleeding in trauma patients taking dabigatran. Objectives: The purpose of this study is to detect whether patients who suffer traumatic injuries while taking dabigatran experience greater mortality or require more blood transfusions than peers on warfarin, aspirin, or clopidogrel, or those not taking any HMAs. Methods: In this retrospective cohort study, all subjects were selected from the population of patients admitted to Shock Trauma Center between January 2010 and December 2011. All patients taking dabigatran prior to admission were considered cases. Two populations, one taking a combination of HMAs (e.g. warfarin, aspirin, or clopidogrel), and a control group not taking any HMAs, were matched to the dabigatran subjects using sex, age (+/-2 years), and exact Injury Severity Score (ISS). The primary endpoints were mortality and the number of blood products transfused in the 24 hours following injury. All statistics were calculated using SAS 9.2. Results: Fifteen trauma patients taking dabigatran prior to arrival were admitted during the study period. Compared to controls, patients on dabigatran tended to be male, older, had higher ISS, and longer lengths of stay (LOS) ( Table 1 ). There was no difference in mortality or number of transfusions in the dabigatran group compared to the control group ( Table 2) . None of these findings were statistically significant. 
Conclusion: This is the first study examining the safety of dabigatran in trauma. In our population, patients taking dabigatran were predominantly older men, and they had more severe injuries and longer length of stay, as compared to the average admitted trauma patient. Patients on dabigatran did not experience a greater mortality or have a greater transfusion requirement. This study is limited by being small and retrospective. To evaluate the independent risk of anticoagulation, patients meeting anatomic, physiologic, or mechanism of injury criteria for Level I triage were excluded. Drugs were categorized as anticoagulantcoumadin, lovenox; anti-platelet-clopidogrel, Aggrenox; or aspirin. Trauma center need was defined as an aggregate of ICU admission, non-orthopedic procedure within 24 hours, and death. Because of the likely interaction of drug use and age, a secondary analysis adjusting for age> or < 55 was performed. Odds ratios (OR) and 95% CI were calculated for associations between drug use and outcome using logistic regression. Results: A total of 8544 patients met inclusion criteria; 747 were excluded because medication use was unknown and 105 were removed due to being on more than one medication, leaving 7692 for analysis. The Conclusion: Oxygen saturation was the best predictor for pediatric pneumonia in our population and should be further studied in a prospective sample of children presenting with respiratory symptoms in a resource-limited setting. Background: Abdominal pain is the most common reason for visiting an ED, and abdominopelvic (abd) CT use has increased over the past decade. Reasons for this increase have not been well delineated. It is plausible that this has occurred because of perceived diagnostic accuracy, specifically with potentially life threatening conditions. To our knowledge, no one has evaluated the relationship between pretest probability of disease, disease severity, and abd CT ordering threshold. Objectives: To test the hypothesis that pretest probability of disease would vary based upon the suspected acute life-threatening diagnosis for abdominal pain patients in whom CT was ordered. Methods: Prospective study at three urban EDs using a shared electronic medical record with an electronic accountability tool implemented from Oct 2011 to Mar 2012 for all abd CT orders. Inclusion criteria: age >=18 years, non-pregnant, and chief complaint/ pain location of abdominal or flank pain. All attempts to order abd CT triggered the accountability tool which only allowed the order to proceed if approved by the ED attending physician. Using force field data entry, the attending was required to enter the suspected primary diagnosis and pretest probability (0-100%) of the primary diagnosis. The main outcome was pretest probability of primary diagnosis. Analysis of variance was performed to compare pretest probabilities by diagnosis and reported as means with 95% confidence intervals (CI). Results: 126 ED physicians were enrolled over 3 days. Compared to expert-adjudicated interpretation, the sensitivity to detect bleeding was 94% (STD 0.15) and the specificity was 87% (STD 0.33). Conclusion: After brief training, ED physicians can interpret video capsule endoscopy to endpoints of gross blood or no blood with high sensitivity and specificity. Repeated ED visits occurred in 11% (N=45). There was no difference between patient ages when spring/summer were compared with fall/ winter months (p<0.93). 
With regards to median age/yr of study, the following was demonstrated: 3.6/1999, 2.0/2000, 2.5/2001, 3.2/2002, 5.1/ 2003, 3.0/2004, 3.1/2005, 5.0/2006, 26/2007, 5.2/2008, 6.0/2009, 3.4/2010, respectfully (p<0.55) . There was, however, a difference with regards to age when stratified by sex. The median age for females was 26 years (95% CI 11.5-31.4) vs 3.1 years for males (95% CI 2.5-3.9) (p<0.0001). Of the adults presenting with intussusceptions, 22% (n=23) had prior gastric bypass surgery. Conclusion: While the majority of intussusceptions occur in pediatric patients (63%), a moderate number occur in adults. Additionally, the age of presentation of intussusception in females is higher when compared to males. Background: Describing the position of the appendix in relation to the psoas muscle is clinically important. An appendix is optimally visualized by ultrasound when located anterior to the psoas muscle. In this location it can be compressed between the anterior abdominal wall and the psoas muscle during imaging. The effect of patient sex and body mass index (BMI) on anterior positioning of the acutely inflamed appendix in relation to the psoas muscle has not been reported. Objectives: Determine the relationship between patient sex and BMI on anterior positioning of the acutely inflamed appendix in relation to the psoas muscle. Methods: We performed a retrospective chart review on all patients with admission diagnoses of appendicitis at a university tertiary referral center between 2009 and 2011. 621 patient records were analyzed and 450 patient records were included in the final analysis. Excluded records did not have documented CT scan images and/or BMI data. The last CT scan prior to surgery was analyzed for location of the appendix. Patient sex and body mass index (BMI) were recorded from the medical record. Anterior positioning of the appendix was documented if any portion of the appendix crossed anterior to the medial or lateral margin of the psoas on CT scan. Categorical data were analyzed using the Fisher's exact test. Results: When comparing males and females, an anterior position of the appendix in relation to the psoas was present in 67.5% and 54.8% of patients, respectively (p=0.007). With respect to BMI we found an anterior position of the appendix in 77.8% of underweight patients (BMI<18.5, n=27), 52.1% of normal weight patients (BMI 18.5-25, n=144), 61.6% of overweight patients (BMI 25-30, n=138), and 67.4% of obese patients (BMI>30, n=141). This was significant when comparing normal weight subjects to underweight and obese patients. Inter-rater reliability in determining the location of the appendix was good (kappa 0.67). Conclusion: Males were more likely than females to have their appendices located anterior to the psoas muscle. Underweight and obese patients were more likely than normal weight patients to have their appendices located anterior to the psoas muscle. Background: Sickle cell crisis is an intensely painful condition requiring rapid analgesic treatment. Both parenteral opioids and nonsteroidal anti-inflammatory drugs are commonly used to provide relief from pain. The efficacy and safety of intravenous paracetamol have not been evaluated in the management of pain associated with sickle cell crisis. Objectives: This randomized controlled trial was conducted to evaluate the analgesic efficacy and safety of intravenous single-dose paracetamol and morphine for the treatment of acute painful crisis of sickle cell disease. 
We conducted a randomized, double-blind, placebocontrolled trial comparing single intravenous doses of paracetamol (1 g) and morphine (0.1 mg/kg) for patients presenting to the emergency department (ED) with acute painful crisis of sickle cell disease. A minimum of 48 patients in each group would be required to detect a 2-point difference between groups, assuming an SD of 2 points, 95% power, and a 0.05 two-sided level of significance. Subjects reported pain intensity on both a 100-mm visual analogue scale and a four-point verbal rating scale. Subjects with inadequate pain relief at 30 minutes received rescue morphine (0.1 mg/kg). We compared to changes in pain intensity 30 minutes after treatment, as well as the need for rescue medication and the presence of adverse effects. Results: One hundred six adult patients were randomized to treatment, 54 to morphine and 52 to paracetamol. The mean reduction in visual analogue scale pain intensity scores at 30 minutes was 41 mm for paracetamol (95% confidence interval [CI] 32 to 49 mm), and 44 mm for morphine (95% CI 33 to 56 mm). Statistically significant mean differences in pain intensity reductions were compared, and no difference was found between paracetamol and morphine (3; 95% CI -10 to 18; p=0.72). Rescue analgesics at 30 minutes were required by 24 subjects (46%) receiving paracetamol, and 27 subjects (50%) receiving morphine. Adverse effects were experienced by 3 (5%) receiving paracetamol, and 5 (10%) receiving morphine. There were no serious adverse events. Conclusion: Intravenous paracetamol is an efficacious and safe treatment for ED patients with acute painful crisis of sickle cell disease. The Background: Ketamine has pain control properties and at lower doses it retains much of these properties without eliciting the emergence phenomenon. Ketamine has been utilized in surgical and oncology patients as an adjuvant to opioids for pain control. Sickle cell disease pain can be difficult to treat adequately. Pain secondary to vasoocclusive episodes (VOE) may be refractory to high-dose intravenous opioids. Alternative treatments for VOE in the emergency department (ED) are needed. Providing safe, cost-effective pain control may also improve patient satisfaction and ED length of stay. Objectives: To determine the effectiveness of low-dose ketamine as an adjuvant to hydromorphone in relieving pain of VOE in an ED setting. Methods: This pilot study was a randomized, prospective, doubleblinded trial. Both groups received hydromorphone 2 mg IV as initial therapy and an additional 2 mg dose 15 minutes later. The control group then received the normal saline placebo with their second dose of hydromorphone, while the experimental group received ketamine 0.2 mg/kg IV with their second dose of hydromorphone. Visual analog scale (VAS) pain scores from 1 to 10, with 10 being most severe, were recorded on arrival, after ketamine or placebo, and at disposition. Results: Data were obtained from a convenience sample of patients from June 2011 to Oct 2012. The mean age was 29.9 years and 64.86% were male. Seventeen patients received ketamine and 20 patients received the placebo. The mean arrival VAS pain score was 8.7 (95% CI=8.07 to 9.29) in the ketamine group and 8.5 (95% CI=7.90 to 9.05) in the placebo group. The mean VAS score was 6.0 (95% CI=4.71 to 7.29) after the administration of ketamine, a 31.0% decrease, and 5.2 (95% CI=4.01 to 6.46), at disposition, a 40.2% decrease from arrival. 
The mean VAS score after the placebo was given was 7.0 (95% CI=6.20 to 7.85), a 17.6 % decrease from arrival and 5.6 (95% CI=4.27 to 6.93), at disposition, a 34.1 % decrease from arrival. The ketamine and placebo groups had similar mean VAS scores at baseline; however, the ketamine group had a lower VAS score after ketamine was given and at disposition. Ketamine may be an effective adjuvant to hydromorphone in controlling pain associated with VOE. Objectives: To determine the variation in ED evaluation, care, and admission rates for pediatric patients with SCD and fever. System (PHIS) 2010 database of patients, 2 months to 18 years, with a diagnosis of SCD and fever initially evaluated in the ED. Frequencies of care (antibiotics, hematologic labs, microbiologic testing and chest radiographs (CXR)) and admission rates were evaluated. Care documented within the first 2 days of the encounter was used to capture ED care. Adjusted hospitalspecific admission rates were calculated using generalized linear mixed effects models, controlling for hospital clustering and allowing for the presence of correlated data (within hospitals), non-constant variability (across hospitals), and non-normally distributed responses. Results: 4961 patient encounters met inclusion criteria. There was no significant variation in antibiotics, laboratory testing or CXRs across hospitals. There was significant variation in admission rates across hospitals (figure). In hospital-adjusted multivariable modeling, patients 1yr-13 yrs or with commercial insurance has less likelihood of hospital admission (table) . Methods: An attitudes survey, previously validated in a sample of medical providers, was administered to a convenience sample of ED providers at two North Carolina EDs in Nov and Dec 2011. The survey assessed provider perception of and satisfaction in caring for SCD patients and also gathered responses to the Medical Condition Regard Scale (MCRS). Principal factor analysis was performed to identify underlying subscales of the survey. Subscales, constructed by summing items with factor loadings of AE0.40 or greater, were linear transformed onto a 0-100 scale. Provider types were compared using analysis of covariance, adjusting for years of practice. To assess construct validity of the subscales in the ED setting, Partial Spearman correlations were conducted to examine the relation between the subscales and MCRS total scores. Background: Patients with urolithiasis generally follow a benign clinical course and only require symptomatic management and will pass their stones spontaneously. However, a minority of patients will require urologic intervention. Several previous studies have attempted to identify predictors of urologic intervention. Objectives: The objectives of this study were to confirm previously reported risk factors and to identify any other predictors of urologic intervention within 90 days for patients who present to the ED with suspected renal colic. Methods: This was a prospective cohort study of adult patients presenting to one of two tertiary-care EDs with suspected renal colic over a 20-month period. Electronic charts were reviewed 90 days after the initial ED visits to determine if urologic intervention was required. Backwards stepwise multivariable logistic regression models determined predictor variables independently associated with urologic intervention. 
Previous studies have demonstrated that emergency medicine (EM) residents working shorter shifts treated more patients per hour than those working 12-hour shifts. The effect on overall emergency department efficiency when EM residents begin working shorter shifts has not been studied. Objectives: The purpose of this study was to determine if the increase in resident efficiency due to shorter shifts translated into increased overall emergency department efficiency. Methods: This is a retrospective chart review of patients seen in the ED of a large teaching hospital during two 3-month intervals. The first phase occurred from July to September in 2011, and the second phase Shorter upper-level resident shift length appears to correlate with shorter length of stay and fewer patients leaving without being seen. While many variables could have affected the emergency department efficiency during these two study periods, the trend in this emergency department is that efficiency was improved with shorter resident shifts. satisfaction survey questions regarding teamwork (fourth quarters 2010 and 2011) were analyzed using two-sample t-tests. Objectives: To derive a list of system practices to minimize the transfer time to an SRC. We performed a three-round modified Delphi study. A comprehensive literature review was used to identify candidate system practices. Emergency medical services, emergency medicine, and cardiology "experts" who authored relevant published studies and/or served as panelists on relevant regional committees were invited to participate. Consensus was defined as 80% agreement that a variable was "very important (5)" or "important (4)" with a mean ! 4.25 OR 80% agreement that a variable was "not important (1)" or "somewhat important (2)" with a mean 1.75. In Round 1, participants rated the candidate items using the scale above (including "important (3)") and were invited to suggest additional items. Individual feedback was provided, and participants discussed non-consensus and additional items via conference calls. In Round 2, participants rated the items using the same scale. In Round 3, participants ranked the consensus items from Rounds 1-2 from most to least important, and the summary score for each item was calculated. Descriptive statistics are presented. Results: Ninety-eight experts were invited to contribute; 29 participated in Round 1, 22 in Round 2, and 14 in Round 3. Fifty-one total items were evaluated in Rounds 1-2. Consensus was achieved on 12 items in Round 1 and six additional items in Round 2. The most important system practices in Round 3 were prehospital providers performing aelectrocardiograms and referring hospitals and SRCs having established transfer protocols. Conclusion: Expert participants identified 18 system practices that are critical in minimizing transfer time to SRCs. These factors should be considered in the development of STEMI systems of care. Effect of a Dedicated Emergency Department Pharmacist on Antibiotic Administration Times in Sepsis Robert Graham 1 , Thomas Payton 2 , Jason Thompson 1 , Breanne Nestor 1 , and Valerie Williams 1 1 Geisinger Medical Center, Danville, PA; 2 University of Florida, Gainesville, FL Background: Sepsis is a leading cause of morbidity and mortality worldwide, and early antibiotic administration has been shown to improve outcomes. One factor in decreasing these times that has not been studied is the presence of a dedicated ED pharmacist. 
Objectives: The primary outcome of this study is to examine the effect a dedicated ED pharmacist has on antibiotic administration times in septic patients by comparing times from physician order to administration with and without an ED pharmacist on duty. Methods: A retrospective chart review of 678 septic patients between January 2010 and July 2012 in an academic, tertiary-care ED was conducted. Included subjects each had a "sepsis alert" called by a physician, received an antibiotic in the ED, and were admitted to the hospital. Subjects requiring vascular access procedures or antibiotics not stored in the ED were excluded. The times of physician order, pharmacist verification, and medication administration were collected. Two cohorts were created, either with or without a pharmacist on duty. A robust multivariate regression model was utilized to determine significance in times between the two groups, controlling for nursing workload and total ED census. Background: Optimizing ED operations would ideally be based on a computational analysis that highlights key metrics, thus providing the means for measuring administrative changes. In the academic setting, improving efficiency and patient experience must be balanced with maintaining resident autonomy and educational opportunities. Two of the more important outcomes which have been shown to affect patient satisfaction are average length of stay for admitted patients (ALOS-A) and percent who left without being seen (LWBS). Objectives: The purpose of this study was to determine the factors with high independent correlation to ALOS-A and LWBS. Methods: A retrospective review of operational data for patients seen in the ED was conducted at a university-affiliated urban, Level I trauma center, over 92 days. The EDIS was queried to determine daily averages for 40 common ED metrics. The two primary outcomes were ALOS-A and LWBS. Spearman rho correlations were applied to determine the metrics highly correlated with these outcomes. A step-wise multiple regression analysis was then done using these identified metrics. Results: During the study period, 1243 patients (3.6%) LWBS, 8060 (23%) were admitted, and the total volume was 34,923 patients. The metrics with the highest correlation, and the most significant independence, with ALOS-A and LWBS are shown in the table below. Objectives: The objective of this study was to determine if patients in the ED have a preference how physicians address them, whether by a formal name or by first name. We also explored whether a physician's apparent age effects how patients prefer to be called. Methods: In a teaching urban tertiary ED (115,000+ annual visits), surveys were handed out to a convenience sample of adult patients at the end of their ED stays, regardless of admission or discharge. Trained research assistants handed out surveys that asked patients their preference to how the physician addressed them, as well as demographic information. Patients older than 18 years were eligible. Exclusion criteria were intubation, altered mental status, and trauma resuscitation. The comparisons of patient preferences against patient age, race, and sex were analyzed with chi-square tests, and frequencies are reported. Objectives: To describe and compare features of two methods of advanced IV access: peripheral ultrasound-guided (PUG) and external jugular (EJ). 
Methods: In this prospective cohort study, we enrolled patients in an urban tertiary care ED setting who had failed IV access by inspection and palpation and were able to provide written informed consent. We defined an attempt as a provider using one or more skin punctures at a given location. We collected information about IV attempts, including provider background, method, duration of provider effort, and pain scores. We followed IV lines until patient discharge or transfer to an inpatient ward. We defined initial success as establishing an IV at the site of the attempt and failure as infiltration or other abandonment of a previously established IV. We calculated confidence intervals using the t distribution or a normal binomial approximation, as appropriate. Methods: An assessment survey was developed and 16 hospitals in California were asked to complete the survey every 4 hours between 4/ 6/11 and 5/1/11. Twelve variables relating to counts and times in the ED were collected every four hours. ED physicians and ED charge nurses assessed overcrowding on a 100 mm visual analogue scale (VAS) ranging from no crowding to severely overcrowded. Variables were compared to the VAS results using a Pearson correlation coefficient with a p<0.01 being significant. Medians and IQRs were calculated. Results: The database represented 2006 survey times, 126 times for each of the participating hospitals. Sixteen hospitals collected data, but three hospitals were excluded in the analysis for incomplete datasets. 1628 timed survey entries for the 13 hospitals were included in the study. EDs ranged from 18k to 67k ED visits, 62 to 423 acute inpatient beds and 5k to 18k annual admissions. Seven of 13 (54%) were Level II trauma centers and 11 (85%) were base hospitals. Five variables correlated with the overcrowding scale at a value of >0.25 (see table) . Conclusion: For the included hospitals, multiple variables were highly correlated with ED overcrowding. These results could lead to a new overcrowding scale for community EDs. Objectives: To determine the accuracy of ED medication reconciliation using laboratory analysis of urine samples. The hypothesis is that medication reconciliation is accurate. Methods: Prospective non-blinded observational study performed at an urban tertiary care university hospital. Convenience sampling of subjects who underwent ED triage from June -September 2012 was done. Inclusion criteria: subjects 18 years or older who were capable of providing informed consent and reported use of at least one medication. Exclusion criteria: prisoners, no reported medication use, and no urine sample. ED triage medications in the electronic medical record (EMR) were recorded. Subjects provided a urine sample. Liquid chromatography-mass spectrometry/mass spectrometry and liquid chromatography-time of flight/mass spectrometry laboratory analysis were performed on urine samples for a pre-selected sample of 205 analytes. Triage medications were limited to include medications tested for by these analytes. Medications administered in the ED and resulted in the urine were not included in analysis. For each subject, consistency between triage medication list and urine analysis was determined. For each medication, analysis was performed to define the percentage of subjects who 1) did not report the medication on the EMR but had positive urine results (occult medication use), and who 2) reported the medication on the EMR but had negative urine results (non-compliant medication use). 
Medications were grouped by drug class. 95% CI were calculated. Results: One hundred subjects were enrolled; 21 were excluded, and 1 withdrew. The mean age was 51 years, and 54 subjects were male. 66/ 78 subjects had included medications on EMR. 72/78 subjects had analytes detected in urine. 21 drug classes were identified. No subject had 100% consistency of EMR medication list and urine results. Opioids, benzodiazepines, cardiac, non-opioid pain, and psychiatric medications were frequently identified in occult and non-compliant medication use (table) . Conclusion: ED triage medication reconciliation is not accurate. Primary drug classes contributing to this inaccuracy are opioids, benzodiazepines, cardiac, non-opioid pain, and psychiatric medications. Background: Australia is gradually introducing a National Emergency Access Target (NEAT) requiring all hospitals to achieve total ED times of less than 4 hours for 90% of patients. Jurisdictions have taken different approaches to implementation, with some undertaking hospital-wide re-engineering of processes and others introducing limited change. Objectives: To describe the initial effects of a suite of new practices aimed at achieving NEAT in a hospital facing increasing demand. Methods: Prospective descriptive cohort study with 5 years of historical controls describing 16 weeks of ED performance from 6-Feb-2012 in a tertiary mixed adult/pediatric ED seeing 66000 annually. Standard ED indicators and 4-hour total ED time (NEAT) were compared. The intervention was a multimodal practice change including "frontloading" of trolley patients by senior staff in a dedicated area, increased observation medicine, further spaces through use of chairs, and better coordination with bed management. These were achieved with relatively little staffing change. Conclusion: In absolute terms, these practice changes markedly improved the productivity of the ED in producing 4-hour outcomes, but this was barely sufficient to overcome the effect of growth. Despite the involvement of bed management in this intervention, it is clear that NEAT will not be achieved unless admission practices change. Objectives: To evaluate the effect of a standardized interservice "handoff card" on information transfer during patient admission from the ED to an inpatient receiving service. We hypothesize that use of the card would improve completeness of communication. Methods: We implemented a standardized written "handoff card" for ED-to-inpatient transfers at an inner-city tertiary academic medical center. We used a survey-based prospective pre-/post-intervention cohort design to evaluate the effect of the handoff card on the adequacy of information exchange. The sample size was determined by a preimplementation power calculation. Our respondents were a convenience sample of third-year internal medicine residents who had received signout on a patient admitted from the ED within the past shift. We used a chi-square analysis to evaluate differences in the proportion of "yes" responses between pre-and post-intervention cohorts. Results: A total of 163 surveys pre-intervention and 150 surveys post-intervention were collected during the study. After implementation of a written handoff card, there were significant differences in the proportion of positive responses for timeliness (100% vs. 96%, p-value 0.02) and the presence of a problem list (87% vs. 72%, p-value <0.001). 
Similarly, there was a sizable difference in the percentage of inpatient residents who felt they had received adequate information to address anticipated problems (91% vs. 76%, p-value <0.001). Background: For many years now, ED overcrowding has been of concern and studies have tried to point out its various contributing factors. However, the association between the adequacy and effectiveness of pain management and ED length of stay (LOS) has never been studied. Objectives: The present study aims at evaluating the influence of adequate analgesia and the different components of analgesic treatment on LOS. Methods: This is a post hoc analysis of real-time archived data from a computerized medical prescription system and nurses' records used in the ED of an adult tertiary referral center. We included all consecutive ED patients 18 years or older who had a pain intensity of more than 6 (on a verbal numerical scale from 0 to 10), were assigned to an ED bed, and had their pain re-evaluated in the first 2 hours. The main outcome was ED LOS from arrival to discharge in patients who had adequate pain relief (AR) defined as pain intensity at 50% or less of their initial level in the first 2 hours compared to those who didn't have such a relief (NR). We controlled for age, sex, acuity of triage, number of patients in the ED, number of investigations, number of consultations, trauma, need for oxygen, type of analgesic, route of administration and delay before administration. Secondarily, patients being admitted were studied in the same manner. We used independent t-tests, ANOVAs, and regression models where appropriate. Results: A total of 2,974 patients had a re-evaluation of their pain in the first 2 hours, 1,873 were discharged and 1,101 admitted. Among patients discharged or admitted, there was no significant difference in ED LOS between patients with AR (median ED LOS of 8.3 hours, 16.5 hours for admitted patients) compared to those with NR (8.2 hours, 16.2 hours for admitted patients). Secondary analysis shows that for analgesics, only a short delay before administration is associated with a shorter ED LOS, with a reduction in median LOS of 1.6 hours. This translates to a potential saving of 1.5 to 2 beds per day if all patients with a severe pain were treated expeditiously with analgesia given in less than 1.5 hour. Conclusion: In other studies, adequate pain relief has been associated with quality of care; however, in our study it was not associated with shorter ED LOS. A shorter delay before an analgesic was the only variable linked to pain associated with shorter ED LOS. Methods: Arena discrete event simulation of a previously described model of emergency care was utilized (Ann Emerg Med 2010;56:S120). The model included five levels of patient acuity (AL), exponential interarrival times, arrival rates from 1 to 16 patients/hour, acuity-specific processes of care, process-specific resource utilization, clinically derived triangular time distributions for each step, and adjustable acuity mix. Objectives: We sought to evaluate the effect on department efficiency of transitioning from a system with physician teams (attending and residents) designated to specific zones to a dezoned system where physician teams are able to care for patients anywhere within the department. We hypothesized the dezoning of physician teams would decrease LOS and time to be seen by a doctor. 
Methods: Data were obtained between 7/1/11 and 10/31/11 prior to the introduction of the dezoning system, and from 7/1/12 to 10/31/12 after establishment of the new system. The old system had two zones between the hours of 8 am and 2 am with 26 attending coverage hours and 37 resident hours in one zone and 25 attending and 73 resident hours in the other. The new system had one zone with 49 attending and 111 resident hours. We measured patient volume and disposition as well as efficiency measures, including patient length of stay and time to be seen by doctor. Setting: Urban tertiary care university hospital with 50,000 annual ED visits. Design: Pre-post cohort study. Results: There were 19,264 ED visits in the pre-intervention period. Methods: This is a retrospective observational study of patient satisfaction data from patients seen at a single, urban, communitybased Level I adult and pediatric trauma center with an emergency medicine residency program. As part of a performance improvement initiative, a third party vendor specializing in emergency medicine patient satisfaction assessment made up to three telephone attempts to contact each patient discharged from the ED between September 1, 2011, and March 31, 2012 . Patients were administered a standardized survey assessing satisfaction with their overall ED experience. Unadjusted ordinal logistic regression was used to compare overall ED satisfaction (1=worst; 5=best) for patients presenting with abdominal pain, dental pain, low back pain, and headache against patient satisfaction scores for all other chief complaints combined. Objectives: We sought to limit the number of alarms in the ED to the most clinically relevant, and compare alarm durations, a surrogate marker of alarm response times, before and after this intervention. Methods: This before-after study was set in an urban, academic ED with an annual census of 100,000 visits. We collected alarm data from 39 adult ED monitors during a 70-day period starting 9/8/2011. We then implemented a package of default monitor-settings changes based on expert consensus of clinical relevance, as well as an education campaign for ED staff. We collected a 70-day postintervention dataset. Student's t-tests were used to assess differences in mean alarm durations before and after the intervention for each alarm type. Statistical significance was pre-determined using twotailed p-values using the Bonferroni correction for multiple comparisons (0.05/8 = 0.006). Results: There were 216,214 alarms in the pre-intervention period and 28,398 alarms post-intervention, an 87% reduction in total alarms. In the pre-intervention dataset, alarm duration ranged from 0 to 150,326 seconds with a mean of 145 seconds (95% CI 138-151, SD 1473) and median of 4 seconds. The most common alarms are listed in Table 1 . Before and after alarm durations are listed in Table 2 . Post-intervention alarm duration significantly increased for hypoxia (p<0.001) measurements, and significantly decreased for lead failures (p<0.001) and hypertension (p<0.002). Conclusion: Monitor alarms were extremely common in this ED during the period studied. Our intervention significantly reduced alarm frequency in our ED but did not consistently reduce alarm duration. Our results are limited by the fact that the duration of some alarm types may have been affected by the settings changes. Future studies should evaluate reductions in alarm response times as a result of decreased alarm frequency. 
Objectives: The objective was to use a standardized TOC process to decrease the mean number of "missed clinical items" reported following TOC, without increasing the total duration of the sign-out process. We defined a missed clinical item (MCI) as a vital sign, laboratory result, radiology result, or other ancillary datum identified after TOC that was either omitted or inaccurately reported during TOC and that altered or delayed the disposition or treatment plan reported at shift change. Methods: A pre-post intervention study design was utilized for this project over a 4-month period at an urban, academic Level I trauma center. The study participants were the EM residents and attending physicians who participated in the TOC. The oncoming EM resident and attending physicians documented the total duration of sign-out and the number of patients being transferred, and logged all MCIs on a data form throughout their shifts following each TOC. The pre-intervention phase was the first 2 months, while the post-intervention phase was the following 2 months and included the standardized TOC process. The standardized TOC process involved: (1) a group sign-out in a designated location within each care area; (2) a "data resident," responsible for reviewing all orders, vital signs, and lab and radiology results on the EMR for each patient and for controlling the pace of sign-out; and (3) an interruption manager, who was responsible for handling any interruptions during sign-out. Results: Data forms were completed at rates of 75% and 69% for the pre- and post-intervention phases, respectively. The total number of MCIs reported was 77 during the pre-intervention phase and 31 during the post-intervention phase. The results are listed in the table. Conclusion: Implementing a standardized TOC process, which includes the designation of a TOC location, an EMR data reviewer, and an interruption manager, may reduce the rate of TOC-related errors that result in disposition or management changes for ED patients. Possible limitations include failure to either recognize or record an MCI, resulting in under-reporting. Objectives: Our objectives were: 1) to determine whether patients presenting with chest pain or shortness of breath and triaged as 2 (Emergent) on the Canadian Triage and Acuity Scale (CTAS) are triaged to ambulatory areas of the ED more frequently during crowding, and whether those patients are seen more quickly than those triaged to the non-ambulatory area; 2) to compare the proportion of return ED visits for this population when triaged either to the non-ambulatory area or the ambulatory area of the ED. Methods: This study was a retrospective chart review of 394 patients presenting to an urban tertiary care ED with chest pain or shortness of breath and triaged as CTAS 2. Data extracted included triage time and date, time of physician assessment, and disposition. We defined crowding as ED occupancy (the ratio of patients to beds in the ED) greater than 1.5. We analyzed the data with descriptive statistics and chi-square testing. Methods: We performed a mixed-methods quantitative and qualitative study at a 55,000 visits/year Level I trauma center and tertiary academic teaching hospital. Consecutive adult patients between 10/8/2011 and 6/23/2012 who presented to the emergency department with an ED ICD-9-CM diagnosis consistent with pneumonia were enrolled. The primary outcome measure was time to antibiotic administration. Before and after groups were divided into equal 115-day blocks with a 30-day washout period.
Patients were excluded if they signed out against medical advice (AMA), eloped, or left without being seen (LWBS). Significance testing was performed using a proportional hazards model adjusting for age, emergency severity index, daily census, and disposition. We also collected qualitative data about user perception of CPOE in the ED to better understand the results from the quantitative aspects of the study. Statistical analysis was performed using JMP. Our study protocol was registered on ClinicalTrials.gov (#NCT01444768). Objectives: We reviewed the literature to identify the effect of ED-based throughput interventions on length of stay (LOS) in the ED. Methods: We conducted a systematic review using five databases: MEDLINE, CINAHL, Cochrane Library, EMBASE, and Scopus. The date range was not limited. The search terms used were "Emergency Department", "Crowding", "length of stay", and "Intervention". Inclusion criteria: studies had to involve an ED-based intervention designed to improve ED flow, include a comparison between groups, use ED LOS as an outcome, and be in the English language. All studies were reviewed and evaluated by three independent reviewers, with disagreement resolved by consensus. Data were independently extracted using a standardized data extraction form. Results: From 2235 non-duplicate references identified that underwent screening by reading the titles and abstracts, 43 unique studies were included. Of those, 41 studies were single-center studies. Annual visit volume ranged from 19K to 87K; 20 studies were time-series, 10 were quasi-experimental before-and-after, 7 were randomized controlled trials, and 3 were case-controlled studies. Three studies used both qualitative and quantitative methods. Of the 43 studies, 9 (21%) revised triage staffing or approach, and 6 decreased average LOS. Objectives: To determine the effect of environmental factors on provider SA. These factors include patient load, patient complexity, and provider experience. Methods: This IRB-approved study was performed at an urban, academic ED over a 6-week period. Physician SA was measured using the Situation Awareness Global Assessment Technique (SAGAT), a validated objective tool that uses 10-question probes. An expert panel developed 158 questions from four broad topics: diagnostic tests, medical intervention, medical history, and management. A trained research assistant followed residents for 6-hour shifts and, once each hour, asked 10 randomized questions per patient, stratified by question category. The answers were verified in the patients' medical records. Binary classification statistics were used to analyze SA performance. Results: 183 hours of observation were conducted on 15 providers over a 2-month period. 8773 questions were asked about 231 patients over the course of 31 observational sessions. 5.5% of questions were answered incorrectly. There was no significant decrease in false response rate for high-acuity (ESI level 1 or 2) patients. In addition, provider experience did not improve SA. However, chi-square tests between the question categories showed a statistically significant difference between the diagnostic, history, and medical intervention categories. Conclusion: Residents' responses were stronger in some categories of questions than in others, but awareness of their patients did not vary by patient acuity or training year. Further research is required to investigate associations between poor SA and total ED volume and patient load.
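A minimal sketch of the category comparison described in the SAGAT abstract above: a chi-square test of independence on correct versus incorrect response counts by question category. The counts are hypothetical placeholders, not the study's 8,773 responses.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table of correct vs. incorrect answers per question
# category; the counts below are illustrative, not study data.
#                 correct  incorrect
table = np.array([[2050,   95],     # diagnostic tests
                  [2120,  140],     # medical intervention
                  [2180,   80],     # medical history
                  [1940,  168]])    # management

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```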
Methods: This project is a retrospective analysis of 17 emergency physicians from our urban academic emergency department. Monthly individual critical care rates were calculated retrospectively for two months. After this two-month period, each participating attending physician took an online quiz consisting of 30 questions. After this quiz, physicians viewed an OpenOffice™ presentation containing didactic slides pertaining to the appropriateness and application of critical care coding. This presentation included a number of clinically relevant examples with detailed explanations. After completing the lecture, they took the same quiz again. Critical care rates per physician were compared between the two calendar months prior to and the two calendar months after the intervention, with estimation of subsequent revenue increases. Correct response rates on the quiz were measured and compared as well. Results: Quiz scores increased from 78.66% pre-training to 92.23% post-training. The largest increase was seen in questions pertaining to which patients qualify as "sick enough." The average critical care rate increased from 1.26% during the "before" two-month period to 3.20% during the "after" two-month period, a relative increase of 153%. Assuming a $35 per RVU reimbursement (Medicare average), we estimate an annual revenue increase of approximately $87,000 based on our patient volume. Objectives: Evaluate the effect of introducing flexible partitioning between low- and high-acuity ED areas on wait time and resource use. Methods: A discrete event simulation was built to model patient flow through a 50-bed urban teaching ED that sees 90,000 patient visits annually. Ten beds were initially assigned to low-acuity patients only. The effect on wait times and bed utilization resulting from switching up to 5 of these beds to flexible use for low- or high-acuity patients was tested. For flexible beds, low-acuity patients were given priority when the regular 40 beds for higher-acuity patients were not at capacity. Otherwise, high-acuity patients were given priority. The model parameters were estimated from administrative ED data for 7/2010 to 5/2011. Patient acuity was based on the emergency severity index (ESI). Arrival rates for patients of varying acuity were estimated by hour of the day and day of the week. Process times were estimated for patients of varying acuity given their disposition. The model was a steady-state simulation with a warm-up period of 30 days, a replication length of 365 days, and 300 replications. Conclusion: Introducing flexibility into bed assignment between high- and low-acuity patients provides substantial operational benefits for overall ED performance, including lower wait times and more even bed utilization. In this situation, rigid separation of resources decreases operational effectiveness. Objectives: This study aims to investigate the relationship between CDU diagnoses and rates of inpatient admission following CDU observation. We hypothesize that specific diagnoses will have a greater risk of observation failure (OF). Methods: This was a prospective, observational, consecutive-sampling cohort study of patients admitted to a 24-hour, 12-bed CDU from 08/23/11 to 04/01/12 at a suburban, academic emergency department (ED) with 80,000 annual visits. Observation failures (OFs) were defined as patients admitted to the inpatient floor, operating room, or cardiac catheterization lab following observation. Subjects who left against medical advice were excluded.
Logistic regression was used to model OF as a function of diagnosis. Subjects with asthma, cellulitis, chest pain, abdominal pain, and the remaining cohort of diagnoses within the ICD-9 category of signs, symptoms, and ill-defined conditions (RSSI) were included. Results: There were 2429 subjects. Of the patients with asthma, cellulitis, chest pain, abdominal pain, or the RSSI (n = 1480), 49.3% of subjects were diagnosed with chest pain (n = 730), 28.9% with the RSSI (n = 427), 10.4% with abdominal pain (n = 154), 8.1% with cellulitis (n = 120), and 3.3% with asthma (n = 49). Subjects with cellulitis were more than three times as likely, and subjects with asthma more than twice as likely, to experience OFs when compared to those with the RSSI. Objectives: The purpose of this study is to assess whether the engagement of a patient advocate during the ED discharge process results in a sustained improvement in patient satisfaction scores at 6 weeks post-discharge, as reported by Press Ganey surveys. Methods: This was a prospective, cluster-randomized trial conducted in a suburban ED between 10/17/11 and 3/23/12. Eligible study participants included all patients ≥18 years discharged from the ED during regular business hours. Study days were randomized to intervention (patient advocate present) or control using a permuted block design. On intervention days, eligible patients were approached by research interns during the discharge process with a five-question survey to assess readiness to be discharged. On control days, patients were not approached by patient advocates. All patients received 48-hour post-discharge phone surveys to assess communication in the ED, overall satisfaction, readiness for discharge, safety, and understanding of discharge instructions. Mean scores were calculated, and a mixed model with an unstructured covariance structure was used to compare scores between intervention and control groups. Press Ganey scores for Doctor/Overall Assessment categories were retrospectively obtained for all study days. Mean scores were calculated for intervention and control days in each of the 18 categories and compared using the Mann-Whitney test. Results: The study enrolled a total of 335 participants (215 control and 120 intervention). At 48 hours, participants in the intervention group had significantly higher (by 0.46 points) mean satisfaction scores compared to those in the control group (p < 0.0039). There were no significant differences in scores between intervention and control groups on the 6-week Press Ganey survey. Methods: The injury severity rates for bicyclists injured in motor vehicle crashes while riding in bike lanes/shoulders were compared to those for bicyclists riding in traffic lanes. Other factors, including alcohol involvement, speed, helmet use, and light conditions, were also investigated to determine their effects on bicyclist safety. NASS-GES data from 2010 were used to analyze the bike lane data, and data from 2002-2010 were used to analyze the additional factors. Univariate and multiple regression analyses controlling for confounders were performed on the data. Results: When adjusting for the road's speed limit, driver alcohol use, weather, time of day, and helmet use, the cyclist's position had no significant effect on injury severity (p = 0.57). Injury severity was significantly greater when the driver or bicyclist had been drinking alcohol (p=0.003 and p<0.0001, respectively).
Vehicles were traveling at significantly higher speeds when bicyclists were severely injured (p<0.0001). Injury severity was also significantly higher on roads with higher posted speed limits (p<0.0001) and when light conditions were "dark" (p<0.0001). Conclusion: These findings suggest that simply having a dedicated space for bicyclists, such as a bike lane or a paved shoulder, does not significantly reduce injury severity in those injured in bicycle crashes. The results suggest that bike safety may be improved by implementing changes that affect vehicle speed, driver alcohol use, and lighting. Objectives: To examine the relative prevalence of injury versus illness in children with or without a diagnosis of ADD/ADHD presenting for treatment to pediatric emergency departments (PEDs), and to examine the prevalence of adherence to pharmacologic treatment in this population as well as the relative effect of adherence on the likelihood of injury. Methods: The data were collected from a convenience sample of English-speaking children ages 8-17 years seeking treatment in PEDs for injury or illness. Trained research assistants obtained parental consent and child assent during the busiest 16 hours of every day. The participants completed a brief survey instrument determining age, sex, reason for ED visit, diagnosis of ADD/ADHD, specific pharmacologic treatment, and adherence on the day of presentation. Data were entered into Excel and analyzed using STATA. Results: Of 811 patients completing the study, 152 (18.7%) had ADD/ADHD and 319 (39.3%) were injured. Children with ADD/ADHD were no more likely to present with injury vs. illness than children without the diagnosis (37.5% vs. 39.5%, p=0.6). Ninety boys (59.2%) and 62 girls (40.8%) had ADD/ADHD, but there was no relationship between the age or sex of participants with ADD/ADHD or adherence to treatment and the likelihood of injury vs. illness (p=0.57, p=0.27, and p=0.75, respectively). Boys age 12-14 with ADD/ADHD were more likely to be injured (p<0.059). There was no relationship between the sex of the patient and likelihood of adherence (p=0.176). However, children age 15-18 were significantly less likely to be adherent to medication when compared to children 8-11 (p<0.0001) or 12-14 (p<0.0001). Conclusion: This study demonstrated that children presenting to the PED with diagnosed ADD/ADHD do not have a higher prevalence of injury vs. illness than children without ADD/ADHD. Among children with ADD/ADHD, there was no significant relationship between medication adherence and injury, but boys age 12-14 were more likely to be injured than other children with ADD/ADHD. This demographic subgroup may benefit from a targeted injury prevention program. Background: Nationally, bicycle-related injury rates are falling. However, it is unclear whether there has been a difference in urban versus non-urban bicycle-related injuries. Many cities have made special efforts to encourage bicycle use by implementing bikeshare programs, improving bicycle-related infrastructure, and promoting bicycle use for recreation and exercise. With increasing ridership, there is the potential for more injuries, but the changes in bicycle-related injuries in urban areas among adults and children have not been well described. Objectives: To determine the changes in ED visits for bicycle-related injuries in adults (age >16). Methods: This randomized prospective study was conducted at an urban Level I pediatric trauma center.
The study population consisted of a convenience sample of the primary caregivers of asthma patients presenting to the ED between the hours of 8 am and 5 pm, Monday through Friday. Inclusion criteria were: age 2-17 years, English literacy, and a previous diagnosis of asthma or reactive airways disease, presenting with asthma symptoms and accompanied by the primary caregiver. Exclusion criteria were: diagnosis of acute pneumonia or structural lung disease. Trained investigators obtained demographic information, then completed an oral 20-point pretest. Subjects were randomized to watch a 7-minute asthma video (V) or to read the standard asthma pamphlet (P). The primary outcome measure was score improvement on a 20-point posttest following the educational intervention. Subjects were monitored for 30-day ED revisits. Statistical analysis was done using paired t-tests to compare the difference in test score improvement for each group. Results: A total of 29 subjects were enrolled during a 3-month period: 15 to V, 14 to P. Average patient age was 5. Objectives: To explore, characterize, and contextualize VIAP clients' challenges to physical and emotional healing post-injury, their life circumstances, and the services provided by VIAP, to better understand and optimize service delivery for victims of violence. Methods: This was a qualitative study of VIAP clients age >18 who presented to the ED from 7/1/11 to 6/30/12. A random list of eligible clients was generated. A trained non-VIAP qualitative interviewer obtained consent and conducted 20 in-depth, semi-structured interviews based on feasibility. Interviews were audiotaped, transcribed, de-identified, coded, and analyzed using NVivo 10. Thematic content analysis consistent with grounded theory was used to identify themes related to client challenges, including mental health and life circumstances. Inter-rater agreement was calculated to assess consistency of coding. Results: Twenty subjects were interviewed for the study. Of these, 14 (70%) were male and 15 (75%) African American, reflecting the overall VIAP clientele. Agreement among coders was excellent (kappa>0.90). Major challenges to physical and emotional healing were: fear and safety, 16/20 (80%); trust, 13/20 (65%); isolation as a coping mechanism, 12/20 (60%); bitterness, 11/20 (55%); and symptoms of PTSD, 9/20 (45%). VIAP addressed these challenges through counseling and support, 19/20 (95%); help with education, 11/20 (55%); employment, 10/20 (50%); and life skills, 9/20 (45%). Over half of subjects (11/20, 55%) expressed feelings of retaliation immediately after injury; 10/20 (50%) spoke about these feelings with a VIAP advocate or another caring adult. Ultimately, 18/20 (90%) did not retaliate. Conclusion: Mental health and life circumstances are interconnected with the common challenges to physical and emotional healing faced by victims of violence. Understanding these challenges in order to provide Trauma-Informed Care is essential to optimizing service delivery in violence intervention programs. Empowering individuals to return to and be productive in their communities requires support, skills, services, and opportunities. Objectives: To assess the effects of acculturation and parent connectedness on behaviors increasing the risk of crash injury in Latino adolescent males. Methods: From 10/2011 to 10/2012, we prospectively administered a validated acculturation measure, coupled with a youth health risk behavior survey, to northeastern urban Latino adolescent males between the ages of 15 and 18 years.
Participants were asked questions about attitudes toward family and culture, engagement in crash-injury risk behaviors (restraint use, riding with an impaired driver, impaired driving), and use of drugs or alcohol. Univariate regression analysis was performed to assess the effects of acculturation and low parental connectedness on engagement in these behaviors. Results: We enrolled 138 participants with an overall mean age of 16.9 years. Males of Puerto Rican descent represented the largest portion of the sample (59.1%), and although the majority of participants were US-born (61.6%), most had at least one parent born abroad (84.1%). Very few adolescents had driver's licenses or permits (10.1%), and many reported unlicensed driving (55.1%). We found that increasing acculturation was a good predictor of marijuana use, predicting lifetime use (p<0.001), age of initiation (p=0.021), and recent use (p=0.011). Low parental connectedness had significant associations with reports of having been in a motor vehicle crash (p=0.014), cigarette smoking (p=0.003), and having recently started smoking (p<0.001). Low parental connectedness was also a good predictor of binge drinking behavior, predicting both occasions of having 5+ drinks (p=0.006) and times drunk (p=0.024) in the past 12 months. Conclusion: Our study shows strong relationships between parent connectedness, acculturation, and behaviors that place these teens at risk for MVCs. These findings may help to explain current disparities in Latino teen male MVC-related mortality. Objectives: To compare self-reported health risk behavior with a validated metric of risk-taking propensity, the Balloon Analog Risk Task (BART), in adolescents during a clinical encounter. Methods: We conducted a prospective observational study (3/2011-5/2012) enrolling adolescent patients (14-18 years) from a large urban university tertiary care hospital ED and adolescent health clinic (AC). Participants completed a computer-based survey of self-reported health risk behaviors, including motor vehicle occupant and driver behaviors and substance (alcohol, drug, and tobacco) use. They then completed the BART, a validated laboratory-based risk task in which participants earn points by pumping up a computer-generated balloon, with greater pumps leading to increased risk of balloon explosion. The mean number of pumps across balloons (mean number of "risks" taken) predicts real-world risk-taking behavior in adolescents. Results: One hundred teens (mean age 15.9) from the ED (n=58, 29 males) and AC (n=42, 20 males) were enrolled. 31% of teens admitted to ever driving unlicensed. Mean number of pumps on the BART showed a correlation of 0.243 (p=0.015) with self-reported risky driving/riding behavior (restraint non-use, driving unlicensed, driving impaired or while using a cell phone, riding with an impaired driver) and risky attitudes toward driving (dislike of restraint use and traffic safety laws). This correlation remained significant when averaging the number of pumps from the first 10 balloons (r=0.272, p=0.007). Self-reported substance use was not predicted by average pumps (r=0.006, p=0.954). Enrollment location, sex, race, and ethnicity had no significant effect on correlations. Conclusion: The BART is a promising correlate of real-world risk behavior related to traffic safety. It remains a valid predictor of behaviors influencing injury risk when using 10 trials, suggesting its utility as a quick and effective screening measure for busy clinical environments.
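A minimal sketch of the correlation analysis above: Pearson's r between mean BART pumps and a self-reported risky-driving composite. The data are simulated placeholders, chosen only to produce a weak positive association on the order of the reported r = 0.243.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated placeholder data: mean pumps per balloon vs. a composite
# risky-driving score. Not study data.
rng = np.random.default_rng(1)
n = 100
mean_pumps = rng.normal(30, 8, n)                      # mean pumps per balloon
composite = 0.05 * mean_pumps + rng.normal(0, 1.6, n)  # risky-driving score

r, p = pearsonr(mean_pumps, composite)
print(f"r = {r:.3f}, p = {p:.3f}")
```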
This tool may be an important link to future interventions for those most at risk for future injury events. Results: Higher BMI increased lower- and upper-extremity (LE/UE) injuries in frontal crashes, increased thorax injuries in nearside crashes, and decreased head injuries in nearside crashes. Older age increased head and thorax injuries for all crash modes. Increasing age also increased spine and UE injuries in frontal and rollover crashes, as well as abdominal and LE injuries in frontal impacts. Male sex decreased head injuries in nearside and farside crashes and decreased thorax and UE/LE injuries in frontal crashes. Results (figure) indicate that age provides the greatest relative contribution to injury when compared to sex and BMI. Mitigating the relative effects of age, BMI, and sex would likely have the greatest effect on elderly occupant injuries, specifically thorax and head injuries. Objectives: To describe distractions of those who drive child passengers and to examine associations between distracted driving and child passenger restraint in accordance with Michigan law, as well as driver prior involvement in MVCs. Methods: Cross-sectional survey of parents and caregivers of children. Background: Prior research suggests that certain consumer home products, such as toilets, showers, beds, and stairs, are more likely to lead to severe falls and injuries. No systematic population-based analysis of the contribution of these products and associated fall mechanisms to fall-related injuries in the US has been conducted. Objectives: To identify the mechanisms responsible for falls in the elderly resulting in severe injuries among the most common fall-related consumer products in the home. Methods: Data were drawn from the National Electronic Injury Surveillance System (NEISS) database. Inclusion criteria: patients aged 65+ years presenting to US emergency departments with serious injuries (admitted, held for observation, or transferred to another hospital) following a home fall related to consumer products during 2010. All serious fall-related injuries were stratified by consumer product. A rubric was developed to code the mechanisms by which falls occurred for each product and was then applied to the entire sample of patients who met inclusion criteria. National estimates and confidence intervals were calculated accounting for the complex survey sampling of the NEISS. Results: An estimated 169,030 serious home fall-related injuries occurred nationally in 2010 (see table). The top three products related to serious injuries were beds (21.1%), stairs (17.8%), and walkers/canes (11.3%). The major mechanisms for falls were as follows. Beds: unintentional rolling (awake or asleep) out of bed (60%), getting in/out of bed (25%), dizziness or syncope (9%). Stairs: walking up/down stairs, no other mechanism stated (92%), dizziness or syncope (6%). Walkers/canes: walking with walker/cane and fell, no other mechanism stated (75%), reaching/leaning for an object (8%), walker/cane caught on object/ground (7%), dizziness or syncope (10%). Conclusion: This is the first population-based study to identify the mechanisms of injury for home falls that result in serious injuries in the elderly (age 65+). Identification of potentially modifiable risk factors may serve as the basis for interventions to reduce fall-related injuries in a vulnerable segment of the US population. Results: There were 83,251 vehicles of any type involved in head-on crashes in the database.
In head-on crashes where the passenger car's front driver crash rating was superior to the SUV's front driver crash rating, the odds of death were 4.03 times higher for the driver of the passenger car (95% CI: 3.04-5.35). Ignoring crash ratings, the odds of death were 7.59 times higher for the car driver than for the SUV driver in all head-on crashes (95% CI: 6.75-8.52). Background: Foot blisters affect millions of annual US runners, occurring in up to 39% of marathoners and representing more than 70% of medical visits in ultra-marathons. No prospective randomized trial has examined blister prevention in this high-risk population. Objectives: The goal of the study was to determine whether anecdotally effective paper-tape could prevent foot blisters and hot spots in ultra-marathon runners. Methods: Paper-tape was applied by medical staff to each toe, forefoot, and heel of one randomly selected foot of each volunteer racer, with the untreated foot serving as control; the study endpoint was hot spot or blister development at any location on either foot. A sample size of 30 compliant subjects was necessary to detect a 25% reduction in blister incidence, with analysis by chi-square test and independent-samples t-test. Conclusion: This was the first study examining the use of paper-tape for blister prevention, as well as the first-ever blister-prevention study conducted during an ultra-marathon, confirming the ubiquity of foot blisters. While paper-tape was not found to be significantly protective, a type II error may have occurred. Paper-tape was well tolerated, and a trend towards significance was observed at the covered high-friction areas of the foot. Objectives: To address these concerns, we sought to perform a systematic review and meta-analysis of tobacco cessation interventions initiated in the emergency department (ED) with regard to their effect on smoking cessation, all-cause mortality, patient satisfaction, clinician time spent, non-clinician time spent, and cost per quit. Methods: We conducted an electronic search of the MEDLINE and CINAHL databases through June 7, 2012, and hand-searched references from potentially relevant articles. We selected original studies that reported on evaluations of smoking cessation interventions performed or initiated in adult EDs. Two investigators identified eligible studies, evaluated validity, and extracted data. The lack of a homogeneous control group prevented data normalization; however, all-method cessation probability and intervention comparative effectiveness were evaluated using data from the National Health Interview Survey (NHIS). Results: Eleven studies underwent critical appraisal, with seven included in qualitative synthesis and five included in meta-analysis. When combined in meta-analysis, the all-intervention quit-rate (12%, 95% CI: 10-13%) was significantly higher than the 2010 NHIS average (6%, 95% CI: 5-7%), p < 0.001. All interventions except faxed referrals had significantly improved quit-rates when compared to the NHIS average; all interventions performed significantly better than faxed referrals but did not differ significantly from each other. Conclusion: ED-initiated tobacco cessation interventions are an effective means of influencing cessation rates. We observed similar cessation efficacy with pamphlet administration, brief advice (BA), and motivational interviewing (MI); however, BA required significantly less time than did MI. In addition, we found that, when studied, interventions were well accepted by those choosing to receive them.
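A minimal sketch of the quit-rate comparison above: a two-proportion z-test of a pooled intervention quit-rate against a survey benchmark. The counts are hypothetical, chosen only to mirror the reported 12% versus 6% rates; the study's actual meta-analytic weighting is not reproduced here.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts approximating the reported 12% vs. 6% quit-rates.
quit_int, n_int = 240, 2000      # pooled ED-intervention quitters / subjects
quit_ref, n_ref = 600, 10000     # NHIS-style benchmark quitters / respondents

p1, p2 = quit_int / n_int, quit_ref / n_ref
p_pool = (quit_int + quit_ref) / (n_int + n_ref)       # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n_int + 1 / n_ref))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))           # two-tailed p-value
print(f"quit rates {p1:.1%} vs {p2:.1%}: z = {z:.2f}, p = {p_value:.2g}")
```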
Objectives: Several cities in the Atlanta metropolitan area recently repealed their Sunday alcohol laws. Our goal is to determine the effect of repealing these laws on the number of emergency department (ED) visits for alcohol withdrawal; this abstract is the first step and describes the number of withdrawal cases presenting to EDs prior to repeal of the law. Methods: This was a retrospective observational study at an academic medical center and its affiliated hospitals. EDs A and B were located in metropolitan Atlanta, and ED C was located at a community hospital in a wealthy suburb. Participants were identified by ICD-9 code as having presented to a study ED with the diagnosis of alcohol withdrawal. Participants were then stratified by day of week. A Poisson regression was used to compare the number of participants who presented on each day of the week. Objectives: Our goal is to improve patient satisfaction by providing patients with a patient education and expectation pamphlet (PEEP) prior to evaluation. We also wanted to identify any change in emotion potentially related to the emergency visit and to provide a more realistic understanding of the emergency department process. Methods: The design was a prospective, randomized cohort study of patient satisfaction and emotions. The setting was an academic ED of a regional trauma center. Subjects included a total of 352 patients; using an α of 0.05 and a β of 0.20, we required and obtained 176 patients per cohort. The sample size calculations were performed using G*Power 3. The inclusion criterion was all adult patients admitted or discharged from the ED. Patients were enrolled from May to August 2012. The intervention was to give patients a PEEP, which described the process of evaluation, diagnosis, treatment, and disposition in the ED. It also explained the unique dynamics of the ED, such as the potential for delays from EMS traffic, prolonged hold times, and trauma and critically ill patients. All patients completed a survey either at the time of consent or after reading the pamphlet. Patients were randomized to two cohorts, both of which were given a brief survey after the patient received a room for evaluation, quantifying the patient's satisfaction and emotional state on a nominal visual scale. Results: Mean patient satisfaction was 87 for the PEEP intervention group and 81 for the control group (p=0.02). Statistical analysis of the emotion data demonstrated a probability of 0.131 that the reported emotions occurred in a random fashion. Patients receiving the pamphlet were 4% more likely to be accepting during the visit and 1.7% more likely to be happy. Patients not receiving a pamphlet were more likely to be surprised, angry, or to feel rejected. Conclusion: The use of a patient education and expectation pamphlet is an easily implemented strategy that increases patient satisfaction. It may also positively influence the patient's emotional state during the visit. Background: Recent epidemiologic studies of out-of-hospital cardiac arrest (OHCA) incidence have used ICD-9 code 427.5 (cardiac arrest) to identify this cohort of patients. However, the use of the ICD-9 code for this purpose has never been validated. Objectives: We sought to validate the use of ICD-9 code 427.5 as a means of identifying OHCA patients. We hypothesized that ICD-9 code 427.5 would underestimate the total volume of OHCA, as patients who have return of spontaneous circulation (ROSC) prior to arrival in the ED may not be accurately coded for cardiac arrest.
Methods: This was a retrospective observational study from a single academic institution. Patients were identified via keyword search of the ED electronic medical records between January 2007 and July 2012. Keywords searched were "ACLS", "CPR", "PEA", "asystole", "VFIB", "VT", "cardiac arrest", "Epi", "code sheet", "ROSC", "resuscitation", and "AED". Cardiac arrest was confirmed based on standard Utstein definitions documented in the medical record. ICD-9 information and location of ROSC were collected for each patient. We separately searched the electronic medical record during the same study period for patients receiving the ICD-9 code 427.5. The kappa coefficient (κ) was calculated to examine the agreement between true arrest and use of the ICD-9 code, as were the sensitivity and specificity of 427.5 for identifying OHCA. Results: The keyword search identified 1717 patients. Chart review confirmed that 385 individuals suffered OHCA, and 333 patients were assigned the ICD-9 code 427.5. The agreement between ICD-9 code and cardiac arrest was excellent (κ=0.895; 95% CI 0.869-0.921). The ICD-9 code 427.5 was both specific (99.4%; 95% CI 98.8-99.7%) and sensitive (86.5%; 95% CI 82.7-89.7%). Of the 52 (13.5%) cardiac arrest patients who were not identified by ICD-9 code, 33% (17) had ROSC prior to arrival in the ED. When searching independently on ICD-9 code, we found 347 patients were assigned ICD-9 code 427.5, of whom 320 were known "true" arrests. This yielded a positive predictive value of 92% for ICD-9 code 427.5 in predicting OHCA. Conclusion: ICD-9 code 427.5 is sensitive and specific for identifying ED patients who suffer OHCA, with a positive predictive value of 92%; however, there may be a mild underestimation, with bias towards excluding patients who have ROSC prior to ED arrival. Background: There is no evidence that ED patients try to contact outpatient doctors first. A recent CDC survey found that lack of access to care and perceived seriousness of illness were the top reasons for patients' ED visits. More data are needed to structure a health care system that will provide greater access to acute unscheduled care. Objectives: By gathering data on patient interactions with outpatient providers and perceptions of their own illness, this study is a first step in analyzing the problem of access to urgent health care, which contributes to ED overcrowding. Methods: This is a prospective cross-sectional study of adult ED patients presenting to a single tertiary care referral Level I trauma center with 115,000 annual ED visits. Consenting patients or their surrogates in each area of the ED were surveyed during a distribution of 2-hour periods throughout the day and evening, 7 days/week. They were asked whether they had attempted to contact an outside provider prior to their ED visit and, if successful, what instructions they received. Those who did not contact an outside provider were asked why they chose the ED as their first source of care. Results: Of our cohort of 476 patients, 88.2% (420) had a primary doctor, and 45.6% (217) of those patients attempted to contact any doctor prior to their ED visit. Of the group that sought outpatient care initially, 81.6% (177/217) were told to come to the ED, or 37.2% (177/476) of the total population. For those who were not sent to the ED by another provider, reasons for ED visits are shown in the table (patients could select more than one option).
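Returning to the ICD-9 validation abstract above, its agreement statistics (sensitivity, specificity, PPV, and the kappa coefficient) can all be derived from a single 2x2 table. In the sketch below, the cell counts are illustrative placeholders chosen to be roughly consistent with the reported totals; the study's exact table is not given in the abstract.

```python
# Illustrative 2x2 table: chart-confirmed OHCA vs. ICD-9 427.5 coding.
tp, fn = 333, 52       # true arrests coded / not coded 427.5
fp, tn = 8, 1324       # non-arrests coded / correctly not coded
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)

po = (tp + tn) / n                                            # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)

print(f"sens={sensitivity:.1%} spec={specificity:.1%} "
      f"PPV={ppv:.1%} kappa={kappa:.3f}")
```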
The majority (82.8%) of patients surveyed came to the ED due to either patient or provider perceptions of illness severity or the need for specialized services. A recent CDC report found that only 8% of ED visits were nonurgent (should be seen in 2-24 hours). Improving access to primary providers may help reduce ED crowding, but it may also be time for health care systems to provide an alternative form of acute unscheduled care to ease the burden on both primary and emergency care systems. Objectives: We sought to determine the frequency with which these four medications were administered in the ED and to describe demographic differences between medication subgroups. Methods: We analyzed data from the National Hospital Ambulatory Medical Care Survey (2006-2009) and identified patient visits by medication codes for the preselected antiemetics. Demographics, including age, sex, race, ethnicity, geographic region, metropolitan statistical area, insurance status, and patient acuity level, were analyzed across medication groups. We estimated the total number of national ED visits that received each antiemetic in weighted percentages and compared these using chi-square. The Bonferroni correction was used to adjust for multiple comparisons, with an adjusted p-value of p<0.001 considered significant. Results: First-line antiemetic use by medication was (weighted percentage of visits, 95% CI): prochlorperazine (0.6, 0.5-0.7), metoclopramide (1.2, 1.0-1.3), ondansetron (5.0, 4.5-5.5), and promethazine (4.7, 4.4-5.1). Compared to other antiemetics, prochlorperazine was administered more frequently in younger adults (age 25-34) and in the Midwest; metoclopramide more frequently in black patients and in the Northeast; ondansetron more frequently to insured patients, white patients, and those over age 64; and promethazine more frequently in rural areas and in the South (p<0.001). Conclusion: While many antiemetics are available, these four are administered in over 10% of ED visits. Drug shortages of prochlorperazine may disproportionately affect younger adults (age 25-34) and the Midwest, while black patients and the Northeast may be more affected by metoclopramide shortages. Future studies will be needed to identify the effect these shortages have on the identified populations. Background: Injured patients are routinely treated in trauma and non-trauma centers. Previous research has shown the benefits of trauma centers for the severely injured; however, the relative benefit of trauma centers in treating injuries of lesser severity is not as definitive. Objectives: The objective of this study was to examine differences in mortality and resource utilization (inpatient charges and outpatient procedures) at trauma and non-trauma centers for adults with minor and moderately severe injuries. Methods: A cross-sectional, population-based study was conducted for all adult patients (age 18-64) admitted to acute care hospitals in California with a diagnosis of injury, using the 2010 patient discharge dataset and the 2010 emergency department dataset from the California Office of Statewide Health Planning and Development. Injuries were defined using ICD-9 diagnosis codes, and injury severity was calculated using the ICDPIC Stata module. Minor injuries were defined as having an injury severity score less than 5, and moderate injuries between 5 and 15. Transfer patients were not included. Multivariate logistic and linear regression was used to examine differences in mortality, charges, and procedures, with adjustments for demographic and clinical characteristics.
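A minimal sketch of the adjusted comparison described in the Methods just above: logistic regression of inpatient mortality on trauma-center level with demographic covariates, using the statsmodels formula interface. The data frame, column names, and simulated values are hypothetical, not the California discharge data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the discharge data: outcome, center level,
# and two demographic covariates.
rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "died": rng.binomial(1, 0.01, n),
    "center": rng.choice(["non_trauma", "level1", "level2"], n),
    "age": rng.integers(18, 65, n),
    "male": rng.binomial(1, 0.6, n),
})

# Logit model with non-trauma centers as the reference category.
model = smf.logit(
    "died ~ C(center, Treatment(reference='non_trauma')) + age + male",
    data=df,
).fit(disp=0)
print(np.exp(model.params))   # odds ratios relative to non-trauma centers
```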
Results: Among adults with minor or moderate injuries, 292,412 were treated in trauma centers and 857,197 were treated in non-trauma centers (excluding transfers). No significant difference in inpatient mortality between non-trauma centers and trauma centers was found. However, average inpatient charges and outpatient procedures were significantly higher at trauma centers than at non-trauma centers. After adjustment, average inpatient charges for adults with minor or moderate injuries were 13.3% higher (95% CI 11.1% to 15.4%) at Level I trauma centers and 31.1% higher (95% CI 29.1% to 33.1%) at Level II trauma centers compared to non-trauma centers. Level I trauma centers performed 0.79 more outpatient procedures (95% CI 0.77 to 0.82), and Level II trauma centers performed 0.65 more outpatient procedures (95% CI 0.63 to 0.67), than non-trauma centers. Conclusion: For adults with minor and moderate injuries, treatment at trauma centers results in higher inpatient charges and greater outpatient procedure use than treatment at non-trauma centers. Objectives: The objective of this study was to compare current adverse drug reaction (ADR) reporting in patient EMRs against information gathered during patient interviews in the ED. Our hypothesis was that ADR reporting in the EMR agrees significantly with the ADR history obtained by interviewing the patient. Methods: The study was conducted over a 5-month period ending in April 2012 at a Level I trauma center. This was a convenience sample study comparing a prospective patient interview with previously documented ADR histories in the EMR. Sex and age were recorded. Inclusion criteria were all adults with at least one documented ADR in their EMR. Interviews and EMRs were assessed for both the reaction type and the reaction descriptor. Reaction types were allergy, adverse reaction, or none recorded. Reaction descriptors were anaphylaxis, GI symptoms, hives, rash, other, or none. The kappa statistic was used to assess the degree of agreement. Kappas were considered significant if p<0.05. Results: There were 101 patients interviewed in this 5-month period. A total of 235 adverse drug reactions (ADRs) were recorded either in the EMR or during the interview. Thirty-five percent of patients were male, and the mean age was 51 ± 17 years. EMR reaction types were allergy (82%), adverse reactions (10%), and other (8%). EMR reaction descriptors were anaphylaxis (8%), GI symptoms (12%), hives/rash (27%), other (15%), and none (40%). Agreement between the EMR and the interview was found in 48% for reaction type and in 54% for reaction descriptor. Kappa values between the EMR and the interview for reaction type (kappa=0.08, p=0.03) and reaction descriptors (kappa=0.45, p<0.01) were both significant. Total profile agreement occurred in only 9 patients. Conclusion: We found significant degrees of agreement between the EMR and direct interviews. However, the degree of agreement for reaction types was slight, and for reaction descriptors it was fair to good. Even with the use of electronic medical records, only about half of ADRs are correctly recorded. Better methods are needed to properly record ADRs to ensure patient safety and care. Objectives: To determine which complaints were associated with the greatest recidivism, and to characterize the patterns of complaint among ED patients who present on multiple occasions. Methods: Data were generated from retrospective medical record review of consecutive ED patients from 1/1/2009 to 2/28/2010 using an electronic medical record system at a single, urban, academic, Level I trauma center with over 85,000 ED visits per year.
All patients who were seen in the ED on three or more occasions within the study period were included. Chief complaints were divided into 16 categories according to an adaptation of the principal reasons for emergency department visits used by the National Hospital Ambulatory Medical Care Survey. Results: 5,078 patients returned to the emergency department three or more times with documented reasons for visit on each return. Among these patients, the mean number of visits was 4.8, with a range of 2 to 110. The five most common reasons for representation to the ED were cardiac complaints, abdominal pain, shortness of breath, other pain, and psychiatric complaints. Individual patients averaged 145 days between visits, and the return reason for visit matched the initial reason for visit in 29% of cases. Conclusion: Our data suggest that patients utilizing the emergency department three or more times per year tend to do so at disparate intervals for separate complaints, as opposed to presenting in a cluster of visits for a single complaint. The observed pattern of utilization suggests that total recidivism may be due in large part to an underserved population presenting over long intervals with distinct complaints, even among patients who present three or more times per year. Background: Although patients with severe hyperglycemia can be critically ill, many may be safely discharged home. It is common practice to provide insulin and/or IV fluids to lower glucose levels prior to discharge. Actual physician practice for these patients, and its association with ED length of stay (LOS), are unknown. Objectives: To document physician practice for eventually discharged patients with severe hyperglycemia, and to determine the association between care received and LOS. Methods: This is a secondary analysis of a retrospective observational cohort study at a high-volume urban Level I trauma center of all ED patients with glucose levels ≥400 mg/dL at any point during their visits who were discharged directly from the ED between January 2010 and December 2011. Exclusion criteria included type 1 diabetes. Arrival and discharge glucose levels, labs ordered, treatments provided, and LOS were recorded. Data were analyzed with chi-square, ANOVA, and multiple regression. Results: 422 patients with 511 ED encounters were identified (see table). Results: 9759 patients with TIA and minor stroke were included, 47.2% and 52.8%, respectively. The ED discharge rate was 25.5% for stroke and 74.5% for TIA patients. In the overall cohort, an increasing degree of crowding was associated with a decreased risk of discharge (table). When stratified by ED annual volume, increasing ED volumes were associated with an increased risk of ED discharge in higher-volume sites, while the opposite effect was seen in lower-volume sites (table/figure). Conclusion: ED crowding is associated with an increased risk of discharge of TIA and minor stroke patients, but only in higher-volume settings. This may reflect the burden of ED crowding in higher-volume sites, as well as better access to rapid outpatient specialized follow-up. Given the lack of validated risk stratification tools, ensuring appropriate disposition among high-risk patient populations even during crowded conditions seems paramount. Objectives: Our goal was to perform a needs analysis of uninsured patients admitted to our hospital through the ED.
The purpose was to determine whether they could be linked to primary care for non-urgent needs through a transition-of-care program involving social workers interfacing with case managers, or through a monthly retainer payment system. We hypothesized that a main barrier to obtaining health insurance was cost; thus, if presented with an affordable option, patients would be more willing to obtain health care coverage. Methods: Surveys were administered from July to August 2012. The survey consisted of 32 questions inquiring about the reasons keeping patients from obtaining health insurance or pursuing primary care at low-cost community clinics. The inclusion criteria were: patients who were uninsured, 18 years or older, and admitted to the hospital through the emergency department. Results: Fifty patients completed the survey; one patient withdrew. For these patients, the main reason for not having coverage was loss of employment (53%). Other reasons reported were cost and self-employment. Interestingly, 88% of the undocumented and 74% of the documented patients were able to pay a low-cost premium. More than half of all patients (62%) reported having nowhere to go for routine care. Patients who had not visited physicians for more than one year (67%) had not done so due to cost. Most patients expressed a lack of familiarity with low-cost community clinics but were interested in learning about them. Conclusion: The majority of patients reported unemployment and cost as the main reasons for being uninsured. An affordable monthly retainer payment program could have a positive effect on these patients, allowing them to seek primary care for non-urgent health care needs. The implementation of such a program will be considered in addressing overcrowding in our ED. A major limitation of our study was the small sample size; we expect to expand our sample size in the future. Objectives: To determine factors at the U.S. population level associated with interest and participation in medical research and, specifically, interest in the ED setting. Methods: We conducted a cross-sectional household survey using a nationally representative web-enabled sample, including households that received free computer hardware and internet access if they wished to respond but were not previously online. Primary outcomes included: interest in medical research participation, preferred location for participation in medical research, and previous participation in medical research. We conducted standard bivariate and multivariate analyses using Stata 12. We applied survey weights to permit national inferences. Results: Our sample included 2,668 adult respondents (response rate = 61%). Of those, 11% reported prior research participation and 42% expressed general interest in participating in medical research. In regression analyses, general interest in research participation was associated with a household income between $25,000 and $50,000 (three-fold higher odds of interest than a household income less than $12,500). Hispanic ethnicity was associated with lower odds of general interest (OR 0.41, 95% CI 0.19-0.91) than non-Hispanic white race/ethnicity. Other factors, such as age, sex, and education, were not associated. Substantially higher proportions of adults expressed interest in research conducted in primary care (83%) and inpatient settings (77%) as compared with research conducted in the ED (23%). Of note, odds of interest in ED-based research participation did not differ by age, education, income, race/ethnicity, insurance, or previous research participation.
Conclusion: In this nationally representative sample, adult interest in participation in medical research varied widely by research setting. The ED is less favored than inpatient and primary care settings, which has implications for advancing emergency medical care. Investigators may need to enhance recruitment mechanisms to engage willing participants for ED-based research. Objectives: Our aim was to determine whether persons who identify as TG have avoided seeking ED care and to assess the experiences of those TG people who have been ED patients. We hypothesized that many TG persons avoid or defer seeking care when they may have needed it, and that those who have been to EDs have had negative experiences related to their TG identity. Methods: This is an anonymous survey of a convenience sample of TG patients. The survey collects both quantitative data and qualitative narratives of past experiences and satisfaction. Surveys were distributed to self-identified TG individuals at a local New York City health clinic that serves the transgender community. Results: Thirty-four people have completed the survey. Of the 27 who at some point wanted to use the ED, 7.4% did not. Of those who reported past ED visits, 42% of respondents reported negative experiences, 20% had positive experiences, and 12% reported mixed experiences. More than half of those with negative experiences referred to ED staff using incorrect gender pronouns. Participants offered the following recommendations: providers should use the patient's preferred pronoun and should discuss TG status only when it is relevant to the medical issue at hand. Conclusion: ED care is accessible to the TG people in our survey population, but many TG patients report negative ED experiences related to a perceived lack of respect and incorrect pronoun use by ED staff. Our study participants may reflect a biased sample population: nearly 76% of our participants at least graduated college, and all are already connected to health care. These data indicate the need for further research, perhaps recruiting from a population that is not as well connected with health care. This survey supports the need for ED providers to receive training on how to care for this patient population in a culturally sensitive manner. Objectives: To demonstrate the financial and logistical feasibility of a clinical pharmacist in an urban VA Medical Center ED. Methods: We performed a prospective observational study in which residency-trained doctors of pharmacy provided clinical pharmacy services in the Atlanta VA ED. Over a 2-week period, a 30-hour pilot was conducted in which patients were selected to receive the following services: anticoagulation consult, diabetic education, pharmacokinetic consult, medication history, medication reconciliation, formulary management, medication refills, therapeutic interchange, screening for drug interactions, allergy review, and non-formulary/restricted medication requests. Additionally, the pharmacist reviewed ED charts and offered assistance with order clarification, IV compatibility, eliminating duplicate therapy, and screening for medication errors and adverse drug events (ADEs). Cost avoidance estimates for these interventions were made using existing models from Lee et al and Ling et al and were adjusted for inflation using the Bureau of Labor Statistics' consumer price index. Results: A total of 42 patients received 71 interventions by the clinical pharmacists.
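A minimal sketch of the inflation adjustment described in the Methods just above: scaling a published per-intervention cost-avoidance estimate by a ratio of consumer price index values. The CPI figures and base estimate are placeholders, not actual Bureau of Labor Statistics data.

```python
# Placeholder values: a published cost-avoidance estimate and the CPI-U
# index for its publication year vs. the study year.
published_estimate = 1000.00   # cost avoidance per intervention, base-year $
cpi_base_year = 172.2          # CPI-U when the model was published (assumed)
cpi_current = 229.6            # CPI-U in the study year (assumed)

adjusted = published_estimate * (cpi_current / cpi_base_year)
print(f"inflation-adjusted estimate: ${adjusted:,.2f}")
```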
The estimated cost avoidance value for an adjusted dose or frequency of medication was $1,486.00; elimination of duplicate therapy, $205.00; prevention or management of ADEs, $1,374.00; prevention or management of allergies, $1,721.00; education/information inquiry, $512.38; formulary management, $174.80; and therapeutic interchange, $174.80. Prescription refills were valued at $12.50 each, using the difference between physician and pharmacist salary. The aggregate cost avoidance over the study period yielded potential savings of over $40,000; these data extrapolate to a yearly amount of over $2.7 million. Conclusion: A clinical pharmacist in an urban VA Medical Center ED is financially and logistically feasible and provides a potential cost savings benefit of over $2.7 million annually. Objectives: In this study we aimed to identify changes that could increase customer satisfaction among emergency department (ED) patients. Methods: In phase I, an external vendor estimated patient satisfaction by randomly conducting 50-question telephone interviews of adult ED patients seen between Jan. 2009 and Mar. 2012. In phase II, we surveyed consecutive adult ED patients during a 3-week period in October 2012 to elicit determinants of patient satisfaction using the Kano model. The survey cards consisted of negative and positive questions addressing four change proposals: 1) providing access to provider background, 2) receiving follow-up communication after the ED visit, 3) shared decision making between provider and patient, and 4) protocol-based testing prior to provider evaluation. To prevent bias by the health care team, the survey cards were distributed to patients by registration staff. Results: In phase I, 4976 completed questionnaires were received. The "patient's perception that their physician understands and cares for their concerns" was identified as a key opportunity for improvement of overall customer satisfaction (Figure 1). The results of phase II surveying are displayed in Figure 2. On Kano analysis of responses, receiving physician biographies and follow-up letters were exciters, and shared decision making was rate-related, while protocol-based investigations prior to being seen by a physician were unrelated to patient satisfaction. Conclusion: For the ED patients in this study, customer satisfaction was highly related to their bond with the clinician. Kano analysis suggested that exciters of customer satisfaction included having better access to biographical data of emergency providers and having follow-up communication with patients. Shared decision making between clinicians and patients is likely to have a rate-dependent effect on satisfaction scores, whereas protocol-based testing is unlikely to significantly affect customer satisfaction. Objectives: The purpose of this study was to characterize current staffing models for ICUs in Iowa. We hypothesized that critical care services in Iowa are primarily provided by non-intensivists and that much of the after-hours and procedural coverage for the ICU is provided by emergency physicians simultaneously covering the emergency department. Methods: We surveyed Iowa hospitals with ICUs. A standardized questionnaire was used to collect information about the staffing patterns of each ICU, and a single research assistant coded responses. Results: Of 122 Iowa hospitals, 58 (48%) had ICUs, and 46 (72%) of those hospitals participated in the study. Thirteen hospitals (28%) have in-house critical care physicians overnight, and 41% of hospitals rely on EPs to manage ICU emergencies during non-peak hours.
EPs are the exclusive physician seeing ICU admissions during non-peak hours in 28% of hospitals, 61% rely on EPs to provide in-hospital airway management, and 88% require emergency physicians to respond to inpatient cardiac arrests while simultaneously covering the emergency department. In 29% of hospitals, hospital-based paramedics provide emergency airway management, and in 12%, they respond to inpatient cardiac arrests. Conclusion: In a rural Midwestern state, hospitals frequently rely on emergency physicians to cover in-hospital critical care emergencies, likely due to their specific skill set and availability. Investigators should continue to study how these policies affect ED patient care, and training programs must understand the prevalence of this practice to best prepare their trainees for community practice. Background: Technology -based ("TECH") interventions offer an opportunity to address high-risk behaviors in the ED. Prior studies suggest behavioral health strategies are more effective when gender differences are considered. However, the role of gender in ED patient preferences for TECH interventions has not been examined. Objectives: To assess whether patient preferences for TECH interventions varies by gender. Methods: This was a secondary analysis of data from a systematic survey of adult ( ! 18), English-speaking patients in a large urban academic ED. Subjects were randomly selected during a purposive sample of shifts. The iPad survey included questions (validated when possible) on access to technology, preferences for receiving health information, and demographics. "TECH" was defined as web, text message, e-mail, social networking, or DVD; "Non-TECH" as in-person, written materials, or landline. We calculated descriptive statistics and used univariate tests to compare men and women. Gender-stratified multivariable logistic regression models were used to examine associations between other demographic factors (age, race, ethnicity, income) and TECH preferences for information on specific risky behaviors. to provide public health and medical interventions. mHealth encompasses a broad array of applications and technologies ranging from highly interactive web-based social media to simple broadcast text-message systems. Due to its low cost, ease of implementation, and scalability mHealth has the potential to transform an ED visit into the beginning of a lasting diabetes self-care plan, particularly in resourcepoor, safety-net settings. Objectives: Evaluate the effects of a 6-month unidirectional textmessage program (TExT-MED) designed to educate, motivate, and empower patients with poorly controlled diabetes identified in an underserved ED setting via a randomized controlled trial. Methods: Consecutive adult patients with text-message capable mobile phones and poorly controlled diabetes (HbA1C>8) were recruited from the ED at LAC+USC. Subjects were randomized to usual care versus the TExT-MED program which consisted of two text messages daily for 6 months. Messages were delivered in English or Spanish and constructed from the following categories: 1) educational and motivational texts, 2) medication reminders, and 3) challenges to perform healthy behaviors (e.g. read a food label today). The primary outcome of interest was change in HbA1C across treatment arms. Secondary outcomes included medication adherences, self-efficacy, diabetes-specific quality of life and knowledge. 
Results: 128 patients were randomized, and 45 control patients and 47 intervention patients returned for 6-month follow-up. Overall, diabetes outcome metrics improved in the intervention group, but only increased medication adherence reached statistical significance (Morisky Medication Adherence Score increased by 0.9 in intervention group and dropped by 0.1 in control group, p=0.03). HbA1C went from 10.2 to 9.0 in the TExT-MED arm and from 10.0 to 9.2 in the control arm; p=0.23. The difference between treatment groups in other outcomes were as follows: Diabetes Empowerment Scale (0.2 unit improvement, p=0.58), Problem Areas in Diabetes (6.7 unit improvement, p=0.32), and Diabetes Knowledge Test (0.6 unit decline, p=0.09). TExT-MED is a low-cost, highly scalable system that improved medication adherence in resource-poor ED patients with poorly controlled diabetes seen in a safety net system. Although there were trends towards changes in other health measures, due to the small sample size, other changes did not reach statistical significance. Background: Text-based mobile health (mHealth) has been used effectively in a wide variety of programs including education for chronic disease (diabetes, obesity), to enact behavior change (e.g. smoking cessation), and to promote maternal and child health (Text4Baby), and is being promoted by the CDC. To date, however, the satisfaction with this messaging platform has not been evaluated in lowincome, under-served, and non-English speaking populations. Objectives: To assess the opinions of and satisfaction with a 6month text-based mHealth program (TExT-MED) for patients with poorly controlled diabetes in an underserved, safety-net ED. Methods: Consecutive adult patients with text-message capable mobile phones and poorly controlled diabetes (HbA1C>8) from the ED at LAC+USC were randomized to usual care versus a 6-month unidirectional mHealth intervention (TExT-MED). The TExT-MED program consisted of two text messages daily for 6 months in English or Spanish in the following categories: 1) educational and motivational messages, 2) medication reminders, and 3) challenges to perform healthy behaviors (e.g. read a food label today). Patients assigned to the TExT-MED arm filled out a satisfaction survey upon trial completion. All questions were either a five-point likert scale (strongly agree and agree were grouped as positive for analysis) or yes/no. Of the 64 patients in TExT-MED, 47 completed the satisfaction questionnaire and overall results were highly positive. 71.9% of participants were Spanish speaking. 93.6% of subjects felt text messages were a good way to teach them about diabetes, that they enjoyed the program, and that they understood all the messages. 85.1% of patients felt the number of messages per day was appropriate for them. The medication reminders were the favorite type of message, as 89.4% felt they helped significantly with medication adherence. 78.7% wanted to continue the program after its completion, and 100% reported they would recommend the program to a family member or friend with diabetes. Conclusion: Patients with diabetes seen in a safety-net ED had overwhelmingly favorable impressions of a mHealth program designed to empower them and assist with their diabetes care. Background: Timely and appropriate outpatient follow-up to ED visits reduces health care costs and inefficiencies within the system, and improves health outcomes. 
These appointments are often made or recommended in the ED, but attendance is low (less than 70% at LAC+USC). Patients most often report that they failed to attend scheduled appointments as a result of forgetfulness or confusion regarding dates, times, and locations of these appointment. A simple text-message reminder system may be the key to resolving this clinical problem. Objectives: To study the effect of a fully automated text-message reminder system on patient attendance to follow-up outpatient appointments. sample of ED patients prior to discharge to screen for eligibility. Patients were included if they had appointments scheduled between 3 days and 4 weeks of the ED visit. Patients were excluded if they were critically ill, admitted from the ED, had no subsequent outpatient appointments, did not speak English or Spanish, did not own a mobile phone or know how to receive a text-message, or had a mobile phone carrier incompatible with the text message delivery system. The RAs provided a list of upcoming appointments to all patients. Patients were randomized to usual care versus the text reminder arm. Patients in the treatment arm received text message appointment reminders including date, time, and location at seven, three, and one day prior to their appointment. Attendance at these appointments was collected by RAs reviewing their outpatient records 30 days after enrollment, and the attendance rate of the groups were compared with a two-sample t-test. Results: 2365 patients were approached. 1991 were excluded (254 for critical illness, 81 for language other than English or Spanish, 278 for no mobile phone, 76 for not knowing how to receive a text message, 72 for refusal of consent, 287 for admission, and 812 for no appointments scheduled and 128 for Metro PCS). 374 patients were randomized, and 46 of those in the intervention group were dropped from the analysis due to not receiving the messages or opting out. Groups were similar in age, language, and ethnicity. The intervention group attended 73% of outpatient appointments compared to 62% of the control group (p=0.045). A simple text message reminder system increased patient attendance at outpatient appointments following ED visits, potentially improving health outcomes and inefficiencies in the system. care as "at the breaking point." The concern that population aging could cause the system to pass that breaking point has not been quantified. Increasing racial diversity is a related factor, because minorities visit the ED more than whites. Objectives: Quantify the extent to which demographic change alone may cause increases in ED demand beyond that predicted by population growth. Ambulatory Medical Care Survey (NHAMCS), we determined 2009 ageand race-stratum-specific ED visit rates, lengths of stay, and hospitalization rates. We applied these values to the US population anticipated by the US Census Bureau through 2050. US Census Bureau data and predictions include the entire US population. NHAMCS is a four-stage probability sample of all non-federal US ED visits US. Assuming stratum-specific rates remain at 2009 levels, we predicted ED visit frequency, aggregate length of stay, and hospitalizations every five years to 2050. Our main outcome measure was the ratio of visit frequency change versus population change at each future time point. Secondarily, we predicted ED visit frequencies assuming that stratumspecific ED visit rates continue to increase as they did from 1993-2009. 
The Census Bureau predicts that the US population will increase by a factor of 1.4 from 2009 to 2050. We predict that the frequency of ED visits will change by the same factor, 1.4, i.e., 1.0 times the rate of population change; aggregate length of stay by 1.5 (1.1 times population growth); and ED-to-hospital admissions by 1.8 (1.2 times population growth). If stratum-specific visit rates continue to increase as they did from 1993-2009, we predict that by 2050, ED visits would increase by a factor of 1.9 (1.3 times population growth). Conclusion: Demographic changes predicted by the Census Bureau will not cause US ED visits to increase beyond what is expected due to simple population growth. However, aggregate length of stay and ED-to-hospital admissions will increase faster than population growth, which will exacerbate ED and hospital crowding. The Objectives: Our primary objective was to determine whether the site of discharge after hospitalization (home, home with services, or to a transitional facility) affected the likelihood of being readmitted through the ED. We included all ED visits with a hospitalization in the prior 30 days. We hypothesized that patients coming from home would end up with a higher likelihood of admission. Methods: Using administrative data from a single tertiary academic hospital, we identified all relevant ED and index hospital visits along with the predictors of interest. The primary outcome of interest was readmission. The primary co-variate was disposition after the initial hospitalization (namely home compared to home with services and nursing home/rehabilitation facility). Additional co-variates included age, sex, language, insurance, and time to repeat visit. We fit univariate regression models to determine the predictors with an effect on readmission, then multivariate regression to determine the effect of significant predictors on the association of index disposition with return disposition. Results: There were 8,098 ED visits studied. The table provides a description of the factors associated with readmission through the ED (only primary co-variate is included in the table). Of all the co-variates considered, sex was the only non-significant factor in the univariate analysis. Multivariate analysis showed that being discharged to a transitional facility or home with services was associated with significantly greater odds of admission than patients discharged to home (OR 2.16, p < 0.0001). This effect remained even after adjusting for all significant variables in the univariate analysis. Conclusion: Our findings could speak to particular limitations in care management or in transitional facilities for complex patients after hospital discharge. There are also substantial demographic differences that deserve further attention. We are limited by a single institutional study and did not control for co-morbidity. Our findings should help refine the scope of research and interventions to reduce readmissions through the ED. Based on the Charlson comorbidity index score, super users who had never been admitted also had a lower comorbidity (with a score of 0) than those who had been admitted at least once (43.4% vs. 16.8% with a score of 0; diff=26.6; 95% CI=24.5, 28.7). Conclusion: A small number of super users were responsible for a disproportionate share of ED visits with a 37:1 visit to patient ratio. 
Super users who were never admitted, despite having visited the ED more than 20 times in a year, had lower comorbidity scores than their admitted counterparts and a higher incidence of pain-related primary diagnoses. have difficulty understanding the health information presented to them. Health literacy is a very important yet understudied topic, and there is little research focused on Spanish-speakers. Much work is needed in improving communication with this group. SAHLSA-50 has been validated as a quick tool to screen for low levels of health literacy in the Spanishspeaking population. There is a paucity of data about patients screened using SAHLSA-50. Objectives: We sought to determine the health literacy rate of our Spanish-speaking population in the ED using the SAHLSA-50 tool and to determine if there was a correlation between reported education level and a passing score on SAHLSA-50. We surveyed a convenience sample of 300 patients who presented to our busy, high-volume, urban ED. All subjects completed the SALHSA-50 tool and demographic form with Spanish-speaking research assistants. Results: 67% of the respondents were women. 8% were age 18-25, 42% were 26-40, 45% were 41-65, and 5% were 65+. 11% had less than 3 years of school, 30% had 4-6 years of school, and 59% had at least 7 years of school. As defined by the SAHLSA-50 tool, a score of above 37/50 correct responses was used to determine health literacy. Overall, 83% respondents were health literate. Years of school and health literacy rates were similar between men and women. Those with less than 3 years of school were < 40% health literate versus 74% in those with 4-6 years of school and >95% in those with 7 or more years of school. The elderly (>65 years) reported least years of school completed (on average 4-6 years) and had the lowest health literacy (56.3%). Certain words (prostate, jaundice, impetigo, syphilis, potassium, and gallbladder) were commonly not known or missed by >40% of the population. Conclusion: Our Spanish-speaking population had an overall health literacy rate of 83%. Importantly, those with lower levels of education and elderly patients were more likely to not be health literate. This is an important first step in identifying vulnerable groups who would benefit from improvements in communication as well as identifying certain problem words in the SAHLSA-50 tool. Variation Background: ED admission rates vary markedly across hospitals. Many ED admissions are discretionary, and differential admission practices may be influenced by multiple patient and hospital factors, including payer mix, which is an indicator of the financial incentives and pressures faced by hospitals. Objectives: To evaluate the association of payer mix and other hospital characteristics with variation in ED admission rates. A cross-sectional analysis was conducted of over 22 million ED visits using the 2009 Nationwide Emergency Department Sample (NEDS), the largest all-payer database of ED visits. All adult patients who survived to disposition from the ED were included in this analysis. Hospital-level variables were constructed from visit-level data, including proportions of each payer type (private, Medicare, Medicaid, uninsured), patient characteristics (demographics, household income, case mix), and hospital characteristics (region, trauma designation, teaching status, ED volume, metropolitan location). 
Multiple linear regression was used to assess the relative association of payer mix with variation in ED admission rate, adjusted for patient and hospital characteristics. Results: Across 964 hospitals in the analysis, the median admission rate was 15.3% with significant variation observed among hospitals (IQR 9.2-20.6%). In multiple regression, adjusting for age, sex, and income of patients, no associations were observed between payer mix variables and admission rate. Payer variables explained less than 2.6% of the total variation in admission rates. In contrast, compared with EDs in the lowest quartile of annual volume, EDs in higher quartiles of volume had significantly higher admission rates by 5-7 percentage points (p<0.001). Volume alone accounted for 23% of variance in admission rates. Census region, trauma designation, metropolitan location, and teaching status were also significantly associated with higher admission rates (all p<0.001), and with ED volume jointly explained 31.1% of the observed variation in ED admission. Case mix explained an additional 8.6% of the variance in admission rate. Conclusion: Payer mix does not appear to be associated with variation in ED admission rates among hospitals. Rather, ED volume is more strongly associated with admission rates, for reasons that merit further investigation. Objectives: We hypothesized that condition-specific admission rates within EDs would be correlated. We described 1) variation in ED riskstandardized admission ratio (RSHAR) for frequently admitted conditions, and 2) the degree of within-hospital, condition-specific ED RSHAR correlation. Methods: Cross-sectional analysis of the largest all-payer ED dataset, the 2009 National Emergency Department Sample. The top 10 conditions resulting in admission after ED visit were grouped using Clinical Classification Software. Exclusions: age <18, death in ED, left AMA, or unknown disposition, EDs with <25 visits per condition. Primary outcome was the condition-specific ED RSHAR, calculated using hierarchical logistic regression models that account for patient age, sex, Charlson Comorbidity Index, insurance status, median zip code income, and clustering of patients within hospitals. The RSHAR is a ratio of model-predicted admissions to expected admissions for a hospital of similar case-mix: a ratio >1 indicates a higher than expected admission rate. We report condition-specific ED RSHAR and withinhospital Pearson correlation coefficients. Results: Of 28,861,047 ED visits from 964 hospitals, 4,395,537 (15.2%) resulted in admission. There was significant variation in conditionspecific ED-RSHAR with high model performance (C-statistic >0.85). The conditions with the highest and lowest degree of ED admission variation were nonspecific chest pain and septicemia, respectively (figure). Condition-specific ED RSHAR correlations (table) were uniformly positive (p<0.0001), and pneumonia and CHF were most correlated within hospitals (0.8). Conclusion: There is significant variation in the condition-specific rate of hospital admission from the ED across the US. High variation conditions such as chest pain should be targets for increasing efficiency. High within-hospital correlations between conditions suggest that interventions to reduce admissions should address hospital practice patterns in addition to condition-specific care pathways. Objectives: Examine the charges related to MC injuries compared to gun related injuries in a large academic hospital. 
Our hypothesis is that MC injuries in aggregate are more costly than gun related injuries. Methods: A nested cohort study was conducted of trauma registry entries from an urban medical center between 1/1/07 and 12/31/11. The registry was queried for all e-codes related to MC and gun trauma. Registry data were augmented by, and matched with hospital charges and reimbursement data corresponding to individual injury dates. Charge and demographic data for each group were compared using descriptive statistics and 2-tailed, Student's t-test for the difference between means. Background: Although patients with severe hyperglycemia can be critically ill, many may be safely discharged home. It is common practice to provide insulin and/or IV fluids to lower glucose levels prior to discharge. There are, however, no data supporting this practice. Objectives: To determine the association between discharge glucose levels and glucose reduction and short-term adverse outcomes. Methods: Retrospective observational cohort study at a highvolume urban Level I trauma center between January 2010 and December 2011 of all discharged ED patients with glucose >=400 mg/ dL at any point in the visit. Exclusion criteria included type 1 diabetes. Arrival and discharge glucose levels were recorded. Short-term outcomes at 7 days were defined as: return ED visit for hyperglycemia-related complaint, diabetic ketoacidosis (DKA), hospitalization for any reason, and death. Data were analyzed using chi-square, ANOVA, and multiple regression. Results: 422 patients with 511 encounters were identified. 88 (17.2%), 215 (42.1%), 153 (30%), and 55 (10.7%) patients were discharged with glucose levels 250 mg/dL, 251-350 mg/dL, 351-450 mg/dL, and >450mg/dL, respectively. 180 (35.2%), and 331 (64.8%) patients had a glucose reduction of >200mg/dL and 200 mg/dL, respectively. At 7 days, 52 patients (10.1%) had any adverse outcome, 40 (7.8%) had a return ED visit, 2 (0.4%) had DKA, and 22 (4.3%) were hospitalized. There were no deaths. Iatrogenic hypoglycemia occurred in 9 patients (1.7%). See figure for detailed results. Chi-square analysis showed no association between discharge glucose level and any adverse outcomes. Patients with glucose reduction of >200 mg/dL compared to 200 mg/ dL were more likely to return to the ED (13.83 vs 4.83%, p=0.001). There was no association between the amount of glucose reduction and any other adverse outcome in univariate analysis. When a multiple regression model was applied, controlling for arrival and discharge glucose, glucose reduction, sex, and insulin and fluid administration, however, there was no significant association between any of these variables and any adverse outcomes. See table for detailed results. Conclusion: In contrast to commonly held belief and practice, discharge glucose levels and the amount of glucose reduction are not associated with short-term adverse outcomes in patients with type 2 diabetes with severe hyperglycemia. Pediatric ED Observation Objectives: To compare costs of care for three common conditions (respiratory illnesses, dehydration, and cellulitis) following implementation of ED-based observation protocols. We hypothesized that costs associated with observation care would be lower than inpatient admissions but higher than ED discharges. We conducted retrospective analyses of health system administrative and finance data for visits to a tertiary care pediatric ED during the first year observation protocols were available (April 2009 to March 2010). 
Visits were included based on ICD-9-CM codes selected a priori to represent the three conditions of interest; and excluded were visits with ED lengths of stay (LOS) under 4 hours, admissions exceeding 2 days, and ICD-9-CM codes indicating extreme severity of illness or complex comorbidities. Total costs, ED costs, and the proportion of total cost attributed to the ED were compared using descriptive statistics and pairwise Mann-Whitney tests. Results: Of 1,134 visits that met inclusion criteria, 49% were ED discharges, 18% were cared for on ED observation protocols, and 33% were inpatient admissions. Objectives: To evaluate the relationship between ED admission case volume and inpatient hospital mortality. Methods: Using data from the Nationwide Inpatient Sample, a nationally representative sample of hospital discharges, we examined mortality associated with ED admissions for eight different diagnoses (pneumonia, congestive heart failure, sepsis, acute myocardial infarction, stroke, respiratory failure, gastrointestinal hemorrhage, acute renal failure) and overall between 2005 and 2009 (total number of patients, 18.5 million). These conditions were chosen because they are frequent (in the top 25 of all ED hospitalizations) and high risk (>3% unadjusted likelihood of hospital mortality). EDs were excluded from analysis if they did not have at least 500 admitted annual cases for each diagnosis after hospital transfers were removed. EDs were then placed into quintiles based on hospitalized case volume for each diagnosis. (mHealth) is an emerging health management tool for patients using health-related text messages, email, instant messages, and "apps" on their mobile phones. mHealth has potential to affect the lives of ED patients, by allowing physicians to quickly link them with information on chronic health conditions, while also communicating test results and appointment times. Latinos, who face significant barriers to accessing care, represent an excellent target for mHealth solutions, but the potential for success is highly dependent on their capacity to interact with their mobile phones. Objectives: To assess the mobile capacity of Latino patients in an inner city ED, with particular emphasis on those with chronic disease, and compare our findings to national estimates from the Pew Research Center. Methods: A consecutive sample of Latino patients in the LAC+USC ED were given a survey designed to assess their mobile phone ownership and elucidate the specific features of their mobile phones they knew how to use (a replication of information gathered in the Pew Research Centers report). We also gathered basic demographics and information about health status and chronic medical conditions. Results: In one month, 329 Latino ED patients were surveyed. The proportion of patients who owned a mobile phone was similar in our sample as compared to the Pew report (73% vs. 76%) as was the ability to send/receive text messages (52% vs. 55%). The proportion who could utilize more advanced functions was far less in our sample than reported by the Pew Research Center: "apps" 10% vs. 58%; instant messaging (IM) 16% vs. 34%; mobile internet 19% vs. 31%; send/receive e-mail 14% vs. 27%. When we compared patients with no chronic diseases with patients with at least one chronic disease, mobile capacity dropped by almost half in all categories except the ability to send and receive text messages. 
Conclusion: Latino patients in our inner city ED had high cell phone ownership rates and knew how to send/receive text messages. However, the capacity of our patients to interact with more advanced mobile phone functions was far less than described in national estimates. This capacity decreased even further in Latinos with chronic disease. Researchers/health care systems should be mindful of this digital divide and focus on simple, text message based systems when designing mHealth solutions for this population. The Background: Over the past decade, there has been a substantial increase in the utilization of prescription opioids in the U.S., coupled with an even more dramatic increase in opioid abuse and opioid-related fatalities. Prior studies have demonstrated an increased use of opioid analgesics in the emergency department (ED), yet trends over time and contributions of specific agents are less well characterized. Objectives: To describe trends in opioid prescribing in adult ED patients. Methods: Publicly available data from the National Hospital Ambulatory Care Survey (NHAMCS) from 2001 to 2010 were analyzed. Medications given in the ED and prescribed at discharge, including Drug Enforcement Agency (DEA) schedule II narcotics, DEA controlled substances (III-V), and non-opioid analgesics were tabulated, focusing specifically on opioid analgesics commonly used and those with high abuse potential. To determine if acute painful conditions treated in EDs were becoming more frequent over time, we evaluated if the primary reason for visit was "pain-related" or "non-pain-related". Conclusion: We found a near doubling in the use of opioid analgesics in U.S. EDs in the past decade, coupled with a modest increase in pain-related complaints and non-opioid analgesic utilization. Oxycodone and hydrocodone, agents with high abuse potential, were also the most commonly prescribed opioids in ED patients. ED providers must continue to be vigilant in assessing and treating pain, while minimizing the potential for opioid-related abuse and injury. Methods: Data were prospectively collected for all OOU patients, including extremity cellulitis, fractures, and spine injuries awaiting brace placement. The outcome variable was admission to the hospital vs. discharge home with a secondary outcome of return within 30 days for the same problem. Independent variables included: age, sex, diagnoses, location of problem, antibiotics given, culture results, MRSA +/-, IV drug use history, mechanism of injury, radiology reads, abnormal neuro checks, uncontrolled pain, number of hours in OOU, presence of diabetes/ chronic disease and SIRS criteria (temperature, respiratory rate, heart rate, and WBC count). We used bivariate analysis to determine independent variables associated with the outcome of hospital admission. Logistic regression modeling was used to account for confounding between variables. A priori power analysis for the primary outcome variable of hospital admission indicated that with 100 patients/group (admitted vs. discharged), assuming equal sized groups, there was an 80% power to detect a difference of 20% in an independent variable. Results: Data were prospectively collected from 8/2011-8/2012 for 199 consecutive orthopedic observation unit patients; 62% were male. Diagnoses were infection (cellulitis or abscess of extremity) 76%, fracture 15% and other 9%. Sixty-one patients (31%) were admitted and eight patients (4%) had return visits for the same problem within a 1 month period. 
There were no significant relationships between any of the independent variables and admission on bivariate analysis. Multivariable logistic regression found no significant predictors of hospital admission. Logistic regression was not performed on 30-day return because of the low event rate (4%). Conclusion: An OOU prevented 138/199 (69%) patients from a hospital admission. There were no significant predictors of which patients would require admission. The lack of significant predictors is important in suggesting that without the ability to predict which patients require admission, a system using an OOU can reduce admissions by more than 2/3. Pain Objectives: Our objective is to evaluate acute pain management practices in children with multi-system traumatic injuries. Methods: A chart review was performed on all pediatric patients (<18 years) who had trauma activation from 5/2010 through 5/2012 at a Level I adult and pediatric trauma center. The trauma activation criteria are consistent with ACS recommendation. A total of 234 discrete data elements were abstracted from each patient. They include demographic information, type of trauma, injury type/location, interventions, pain and management, and diagnosis. Also included are time to pain medications and dose. Descriptive statistics were used for data analysis. Results: A total of 469 pediatric trauma patients were seen during the study period. The mean age was 11 years, 39% were female, and 86% were Caucasian. The most common mechanisms of injury were passenger of a motor vehicle crash (36.3%) and fall (25.8%). No pain assessment was documented for 39.0%, at least one pain assessment in 61%, and 29% had more than one assessment. The mean time to any initial analgesia in ED was 49+/-52 min. Opioid analgesia was administered in the ED in 50% of patients. The mean time to the initial opioid in ED was 42+--46 minutes. Initial opioid agents used are morphine (60%), fentanyl (31%), and hydromorphone (8%). The mean initial dose for morphine was 0.05+/-0.03 mg/kg (recommended dose 0.1-0.2 mg/kg), of fentanyl was 1.04+/-0.49 ug/kg (1-2 ug/kg), and of hydromorphone was 0.01+/-0.01 mg/kg (0.015-0.02 mg/kg). Conclusion: Based on our data, there is insufficient pain assessment documentation for pediatric trauma patients. Approximately half of patients receive opioids in the ED, but there is a significant delay in administration of pain medications. Morphine is the most commonly used among opioid agent and initial doses are less than recommended for all opioids administered. Additional information is needed to better understand pain management for children with multisystem traumatic injuries. The Objectives: The aim of this study was to examine the association between parental language and risk of ED return visit within 72 hours. We completed a one-year follow-up of a cohort of children aged 2-24 months with fever/respiratory illness from a single tertiary care pediatric ED. At the initial visit, triage acuity and parental language were recorded. Parents with a primary language of Spanish, or other with self-reported fluency in Spanish, were defined as Spanishspeaking. After one year, a standardized chart review was conducted. The primary outcome was the number of ED visits within 72 hours of the index visit. The total number of ED visits over the year, the child's insurance, and primary care practice were also recorded. Extended hours at each practice were confirmed by direct telephone inquiry. 
, in regards to sport-related concussions, the role of the school nurse includes identifying suspected concussions, advocating for the prevention of concussions, guiding the student's post-concussion graduated academic and activity re-entry process, and communicating with the athletic trainer regarding progress of the student. Objectives: To determine the compliance of school nurses with recommendations as delineated by the NASN's position statement on the role of the school nurse in the post-concussion student. Methods: An electronic questionnaire based on the position statement was distributed to school nurses identified by the NASN directory. Results: Preliminary data analysis was performed on 495 questionnaires. 66% of nurses have had special training in recognizing or managing students with concussions. School nurses are involved in the following roles in regards to the care of the post-concussion student: identifying suspected concussions (81%), providing advocacy for the prevention of concussions (68%), guiding the student's postconcussion graduated academic and activity re-entry process (57%), providing daily medical evaluations (26%), communicating with the athletic trainer regarding progress or setbacks (49%), and providing emotional support for recovering students dealing with concussionrelated depression (58%). 49% of nurses work in districts that have established policies that help students recovering from concussions succeed when they return to school. 53% of nurses work in schools that have guidelines to assist students when returning to school following concussions. These guidelines include excused absence from class (66%), rest periods during the school day (66%), extension of assignment deadlines (69%), postponement or staggering of tests (59%), accommodation for light or noise sensitivity (48%), use of a note taker or scribe (23%), and temporary use of a tutor (22%). Conclusion: Most school nurses are not in compliance with recommendations as delineated by the NASN. Guiding the student's post-concussion graduated academic and activity re-entry process and establishing specific guidelines to assist students when returning to school are identified areas for improvement. Objectives: To determine length of stay (LOS), disposition, and ED return visits for children treated on ED observation protocols. Methods: Nine observation protocols were made available for use in our tertiary-care pediatric ED in April, 2009. We retrospectively retrieved records for all patients treated on protocols in the first year and extracted administrative data. Descriptive statistics were calculated for LOS, disposition, and 30-day ED returns for each protocol. Results: 543 children were cared for on ED observation protocols in the first year. Median age (IQR) was 5 yrs (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) , and 54% were male. The most commonly used protocols were respiratory (31%), dehydration (22%), and cellulitis (19%). For the group as a whole, median LOS was 8.8 hrs (IQR 6.0-12.9), discharge rate was 81%, and 12% returned to the ED in the 30 days after index visit. Results for individual protocols are presented in the table. There has yet to be any research on AZ adherence to the guidelines as well as implications for morbidity and mortality associated with non-adherence. Objectives: Our objective was to determine if adherence to the 2002 AAP guidelines for car safety seats decreases the mortality and morbidity of children involved in MVAs in the state of AZ. 
The hypothesis is those who are incorrectly restrained will sustain a higher degree of morbidity and mortality. Methods: This was a retrospective cohort analysis of pediatric patients ages 0-9 who were involved in MVAs presenting to AZ hospitals from 2002-2010. Using chi-square analysis, data from the AZ State Trauma Registry were calculated. We compared data on morbidity in the form of assigned Injury Severity Score (ISS), disposition, and mortality in those who adhered to the guidelines and those who did not. Results: There were 3,445 MVAs in which children age 0-9 required medical care. Based on the available data, 62% of the children were incorrectly restrained, 17% were correctly restrained, and 20.4% were unknown. 52% of the patients were discharged home. 734 patients (21%) required ICU admission, and 33 died (1%) ( Table 1) . Of those admitted to the ICU, 70% were incorrectly restrained, 9% correctly restrained, and 20% were unknown. Of those who died, 60% were incorrectly restrained, and 27% were unknown. ISS scores were documented 86% of the time, and 91% of those who had ISS>16, died ( Table 2 ). 13% of patients had ISS>16 and were incorrectly restrained (95%CI 11.8-14.9, p<0.001). Conclusion: The majority of children ages 0-9 in AZ were incorrectly restrained. They also had a significant higher morbidity and mortality. The amount of unknown data is high. One limitation is insufficient data on location of child in the car, as well as height and weight of the child, which would affect the characterization of "proper restraints" based on AAP guidelines. Objectives: 1) Evaluate the outcomes of ASA administration, ECG performance, and DTD in adults with suspected ACS presenting to PEDs with adult ACS protocols in place, and 2) compare outcomes between subsets of cases with initial ECG interpretations suggestive (+) and non-suggestive (-) of ACS. Methods: Records from two tertiary-care, freestanding PEDs of a Methods: An online web-based survey was given to a convenience sample of English-speaking parents and/or caregivers who presented to an urban tertiary care pediatric emergency department with children 6 years of age or younger. Objectives: We compared adolescents' perceptions of their asthma symptoms with objective measurements of pulmonary function during acute exacerbations in an emergency department (ED). We hypothesize that adolescents have difficulty accurately perceiving the severity of their own asthma symptoms, potentially limiting traditional approaches to asthma control in this group. Methods: In this prospective study in a tertiary pediatric ED, patients age 10 -16 years with acute asthma exacerbations completed the Acute Asthma Quality of Life Questionnaire (AAQLQ) and the Asthma Control Test (ACT). The AAQLQ measures emotional distress during an asthma exacerbation. The ACT assesses asthma control over the prior 4 weeks. The treating physician then calculated an asthma severity assessment (ASA) score using respiratory rate, work of breathing, oxygen saturation, and wheezing. Pearson correlation coefficient determined correlation between the ASA score and the AAQLQ and ACT scores. Linear regression evaluated the relationship between age, sex, and asthma history (use of an asthma action plan, controller medication, number of ED visits, and hospitalizations for asthma) and AAQLQ, ACT, and ASA scores. Results: Forty-nine patients have been enrolled to date. The Pearson correlation coefficient comparing ASA to AAQLQ was -0.12 (p = 0.42; 95% CI -0.39 to +0.17). 
The Pearson correlation coefficient comparing ASA to ACT was -0.22 (p= 0.15; 95% CI -0.47 to +0.08). Both indicate poor correlation between subjective perception of symptoms and objective measures of disease severity. Linear regression found that a higher number of asthma ED visits correlated with worse ACT scores, but that a higher number of hospitalizations for asthma was associated with better AAQLQ scores. No relationship was found between sex or use of an asthma action plan and survey scores. Conclusion: Adolescents inaccurately perceive the severity of their asthma exacerbations, yet parents may depend on their adolescent children to inform them when experiencing symptoms. This may lead to delayed recognition and treatment of their asthma and increased morbidity and mortality. Results: Children presenting with SIRS constituted at least 13.3% (95% CI: 12.6 to 14.1%), moderately 17.1% (95% CI: 16.4 to 17.9%), and at most 20.8% (95% CI: 20.0 to 21.6%) of ED visits. Taking the minimum and maximum estimates as modified credible intervals (mCI), we report an overall incidence of pediatric SIRS presenting to the ED to be 17.1% (95% mCI: 13.3 to 20.8%). The national moderate estimate of pediatric ED visits with SIRS was 5,515,500 per year. Children with SIRS and without SIRS had similar baseline characteristics, but SIRS patients were younger, had higher triage acuity, were more often admitted, and had higher ICU admission rates than children without SIRS. Infection was the most common (45%) associated etiology, followed by trauma (14%). Other traditional categories of SIRS were extremely rare. Of note, 39% of children with SIRS did not fall into any of the previously established categories. Conclusion: Pediatric SIRS is common; its associated clinical contexts include potentially dangerous etiologies; many cases of SIRS can be recognized early in presentation (i.e. in triage); and there is significant heterogeneity in the etiology of SIRS. Objectives: To estimate the frequency of pregnancy testing among adolescent ED patients administered or prescribed teratogenic medications (FDA categories D or X) and to determine factors associated with non-receipt of pregnancy test. National Hospital Ambulatory Medical Care Survey (NHAMCS) data of ED visits by females ages 14 to 21 years. We estimated the number of visits during which teratogenic medications were administered or prescribed and pregnancy testing was not conducted. We evaluated factors associated with teratogenic medication provision and nonreceipt of pregnancy testing. Results: Participation rate was 57% (n=1,637). 78% of parents agree that finding an effective medicine to improve brain function after life threatening TBI in children is important. More than a third of parents agree with including children with TBI in research studies when parents are not present for consent, both in general and for their own children (table); less than half of parents disagree. Objectives: To externally, prospectively validate three popular clinical dehydration scales in children relative to the gold standard and to compare them to the accuracy of physician gestalt of the degree of dehydration. We prospectively enrolled a nonconsecutive cohort of children 18 years with an acute episode vomiting and/or diarrhea suspected to have intravascular volume depletion by the treating clinician at a tertiary care pediatric emergency eepartment (ED) between June 2011 and November 2012. 
Patient weight using a standard scale, clinical scale variables, and physician gestalt were recorded before and after fluid resuscitation in the ED and upon hospital discharge. The percent weight change from presentation to discharge was used to calculate the degree of dehydration. A weight change of ! 5% was considered clinically significant per established research norms. Receiver operating characteristics (ROC) curves were created for each of the three clinical scales and physician gestalt. Sensitivity and specifity were calculated based on the best cut off points of the ROC curve. Results: A total of 108 patients were enrolled. 89% had mild dehydration and 11% had significant dehydration. The three scales did not have areas under the ROC curve statistically different from the reference line. The Gorelick, WHO, and CDS scales had sensitivities and specificities for significant dehydration noted in the table. Physician gestalt for the detection of significant dehydration in children had a SN of 42% and a SP of 67%. Conclusion: None of the clinical scales predicted significant pediatric dehydration more accurately than physician gestalt. In addition, this is the first external validation of the Gorelick and the WHO scales and the second of the CDS in a North American population. Background: Children who are under age 2, have certain medical co-morbidities, or are hospitalized are at increased risk for influenzarelated complications. Antiviral treatment in these "high risk" patients has been shown to reduce length of symptom duration and rates of influenza-related complications. Based on this evidence, the CDC has created guidelines for recommended antiviral treatment of children with influenza. However, it is currently unknown whether these guidelines are being followed in pediatric EDs. Objectives: To determine the rate of antiviral treatment in patients with laboratory-confirmed influenza who meet CDC guidelines for recommended treatment in a single pediatric ED. Methods: This is a retrospective observational cohort study of an urban tertiary care pediatric ED. We included all patients under the age of 18 with laboratory-confirmed influenza presenting to the pediatric ED between 1/1/2009 and 4/14/2011, who met CDC criteria for recommended antiviral treatment. Patients with greater than 48 hours of symptoms were excluded unless hospitalized. Results: Ninety-six patients with laboratory-confirmed influenza met CDC guidelines for treatment. Only 31 of these patients (32%) received antiviral treatment in the ED. Patients were more likely to be treated (table) if influenza laboratory test results were available prior to discharge or admission (p = 0.01). Of the 65 patients who did not receive antiviral therapy in the ED, 60% (25/42) of admitted and 29% (7/ 24) of discharged patients subsequently received antiviral therapy. The majority (68%) of children with laboratoryconfirmed influenza who met CDC guidelines for antiviral treatment were not treated. Of these children, nearly half (49%) were subsequently treated after their ED visits. However, given that antiviral treatment of influenza is most effective closest to the onset of symptoms, the addition of either rapid diagnostic testing or presumptive treatment is essential to reduce delay between time of symptom onset and treatment. Conclusion: This comprehensive list of post-graduate programs will be useful for health care providers seeking further training in SBME as well as those seeking to develop a simulation fellowship. 
Further inquiry is necessary to determine which aspects of these programs contribute to successful outcomes and whether more standardization would benefit training in SBME. Background: Acute ischemic stroke is a time-critical disease process requiring efficient and thorough evaluation followed by decisive action for treatment. More than 700,000 people suffer their first strokes every year in the United States and up to 20% of them will end up dying within one year. A more favorable outcome has been shown in those patients who receive appropriate, timely administration of thrombolytics. Objectives: To assess the ability of resident physicians to meet the standard of care for treatment of acute ischemic stroke with tPA during a simulated patient encounter. Standard of care was considered administering tPA within 3 hours of symptom onset (30 minutes of simulated case time) while making no critical errors. Methods: This was a pilot study involving emergency medicine residents at all levels of training at a four-year academic residency program. A total of 30 residents participated in separate small group team-based simulations between November 2010 and November 2011. Each team consisted of a single resident physician along with two emergency department RNs. The clinical scenario was a patient with signs of ischemic stroke that began 2 hours and 30 minutes ago. The patient was not eligible for the extended stroke window due to age and comorbidities. This allowed the team 30 minutes to assess and treat the patient to stay within the 3-hour window for tPA. The patients' stroke scale fell into the 14-16 range with some variability in cases due to different individuals doing the acting. Critical actions in the case included obtaining finger stick blood glucose, diagnosing stroke (including NIH Stroke Scale), ordering appropriate medication for blood pressure control, evaluating tPA exclusion criteria, and giving tPA within the 30-minute length of the simulation. Encounters were video recorded and then later reviewed and scored by research assistants using a standardized grading rubric. Results: Out of a total of 30 residents, only 16 gave tPA within the 30-minute timeframe without making any critical errors. Overall, tPA was given by 87% (26/30) of the residents. A critical error (including failure to give tPA) was made by 47% (14/30) of the residents. Of those who gave tPA, 38% (10/26) did this after making a critical error. Conclusion: During small-group stroke simulation, only 53% of residents were able to give tPA within the 30 min timeframe without making critical errors. Dale Cotton UC Davis Medical Center, Sacramento, CA Background: The correlation between stress-related physiological parameters and performance has been studied in certain high-stress fields, such as military aviation. However, there is a relative paucity of this research as it relates to medical training. Objectives: We sought to determine a correlation between heart rate, self-reported anxiety, and performance on a standardized simulation-based skills assessment. Methods: Emergency medicine residents wore Holter monitors during their annual simulation-based exams. Maximum, minimum, and mean heart rates (HRs) and evidence of ectopy were recorded. Baseline HRs were recorded on a non-testing day. Residents reported pre-and post-test anxiety and self-assessments of their performance. 
Baseline characteristics included sex, age, post-graduate year (PGY) status, caffeine consumption (average daily and current), and use of medications known to alter HR. The performance outcome measure was an average of total points obtained from a predetermined checklist for each standardized scenario as assessed by multiple EM attending physicians. We report descriptive statistics and results of a multiple regression model to predict performance. Results: Of 40 eligible residents, 34 were included: 1 refused consent, 3 had technical issues, and 2 were excluded for taking prescription or over-the-counter medications known to affect HR. Elevated HR was common during simulation. The median maximum HR was 140 (IQR: 137 to 151), median minimum HR was 81 (IQR: 72 to 92), and mean HR was 117 (95% CI: 111 to 123). Ectopy was common; 8 residents had 1 to 3 premature ventricular contractions, and 1 resident displayed 28 beats of bigeminy. Pre-and post-test anxiety ratings (scale 1 to 5, 5 = maximum anxiety) were equal, with a mean of 3.3 (IQR: 3 to 4). In a multivariate regression model, only PGY status showed significant correlation to performance (adjusted R 2 for model: 0.26, P < 0.002). Conclusion: Tachycardia was common and ventricular ectopy was notable during medical simulation assessment. However, we found no significant relationship between heart rate and performance. In our small study, only PGY status was found to be a statistically significant predictor of performance. survey and during simulated medical events. Methods: Single academic medical center observational study of EM residents. Participants completed an online survey of six patient AD scenarios. Respondents assigned a code status to each patient. Unbeknownst to the survey participants, a simulation lab occurred 2 months later. The lab covered the same six scenarios from the presimulation survey. Resident teams consisted of balanced training levels. Patient history and the AD were provided as prehospital reports requiring team members to actively locate the AD documents during the code. Based upon the patient presentation and AD, team members independently assigned a code status to each patient using an Audience Response System. Responses to both the survey and the simulation exercises were summarized as descriptive frequencies and statistical significance was analyzed using the McNemar Test. Logistic regression modeling was used to determine predictors of survey responses. Results: In total, 47 residents completed either the pre-survey or the simulation lab survey, but only 17 completed both. Those who completed both did not differ from incomplete respondents in any demographic parameters. The table shows a breakdown of responses from the group. On pre-survey most assigned a DNR code to the scenarios. In the online survey, senior residents tended to assign DNR more frequently and use fewer life-saving measures. Of the 29 students who participated in the laboratory simulations, 50% decided the patient was DNR. Years of residency training had little effect on assigning code status in simulation, but residents who are more senior opted for more aggressive measures during simulation. In assessing EM resident clinical response to AD, physicians are more likely to provide life-sustaining actions in simulation scenarios than in surveys. Senior residents tend to disregard ADs more commonly in simulation than in surveys. 
Objectives: The goal of this study was to validate whether simulated frothy secretions increased difficulty of mannequin intubation. Methods: This was a prospective observational trial comparing two simulated airways. Participants: A total of 26 emergency medicine residents with 1 to 3.5 years experience participated. Interventions: We used two identical Trucorps Airsim Multi airway models in this study with tongues inflated to 10 cc. The control or easy model simulated an unobstructed pharynx. For the difficult mannequin, 50 cc of liquid soap detergent mixed with water and air was infused through the distal simulated esophagus to imitate an airway obstructed by copious frothy respiratory secretions. Participants were asked to intubate the mannequin using a 4 MacIntosh blade under direct laryngoscopy. Suction and assistance with equipment was made available. All attempts were timed and recorded on video and then later scored by blinded reviewers. Reviewers could pause, start, and stop the video as needed to score each attempt. Our outcome included: 1) time from opening the mouth to successful endotracheal intubation, 2) the number of laryngoscope blade adjustments, and 3) the total number of attempts. Paired t-tests were used to compare outcomes. The results for the easy and difficult groups respectively were 1) 47.61 (+/-48.92) and 43.62 (+/-22.61) seconds, 2) 1.64 (+/-1.34) and 1.12 (+/-0.33) attempts, and 3) 6.04 (+/-8.37) and 3.35 (+/-2.04) corrections. There was no significant difference between the two groups in any of the categories with the exception of the number of corrections, which were actually lower in the difficult group. Agreement on attempts between two reviewers was excellent (kappa=1.0). Conclusion: Simulated respiratory secretions did not make intubations more difficult and liquid soap may have facilitated attempts by increasing lubrication. Further revision of the models used and analysis of clinician use will be needed to produce a model that accurately simulates the difficult airway. Objectives: The purpose of this study was to determine if participation in a simulation-based curriculum could improve the learner's confidence with the initial management of several emergent medical conditions and to identify participant satisfaction with this approach to learning. Methods: This IRB-approved study used the one group, pre-test post-test design to compare rotating residents' confidence in managing medical conditions prior to and upon completing a simulation session which occurred during the first 7 months of the 2010-2011 and 2011-12 academic years. During the pre-intervention questionnaire, participants reported self-perceived confidence on 13 different medical conditions. Immediately following the simulation session, the participants indicated confidence on the same 13 conditions. A Wilcoxon signed rank test was used to analyze for differences in confidence between the pre-and postintervention confidence levels. Participant satisfaction was evaluated with a seven-question survey measured on a five-point Likert scale Results: There was significant improvement in confidence (p<0.05) in all of the simulated cases including sepsis, ST elevation MI, atrial fibrillation, and ventricular tachycardia. Four of the remaining nine non-simulated topics demonstrated a statistically significant improvement although the change in pre-test and post-test median confidence scores was less than the simulated topics. 
Participants indicated high satisfaction ratings for the session, especially agreement with adding the session to their rotation and with the opportunity to manage critically ill patients (100% agree/strongly agree). Conclusion: This study demonstrated that a specific simulation-based curriculum designed to meet the needs of residents rotating in the ED was highly rated, improved the confidence of the learners, and could be a useful adjunct to the overall educational program.

Objectives: To determine if a simulation thoracotomy model and scenario is effective in building procedural competency and resident confidence, and in improving resident knowledge. Methods: A prospective, observational survey study was designed with EM residents from a 3-year ACGME-approved EM residency program, utilizing a simulation thoracotomy model/scenario to supplement traditional emergency medicine training and experience. Twenty-one EM residents (90% of the total) participated in the thoracotomy simulation training. The scenario was managed by an EM attending physician and a trauma surgery attending physician. An experience-based survey with a Likert scale (1 = no confidence; 5 = high confidence) and a knowledge-based quiz were given to the participating residents before the intervention, 1 month post-intervention, and 4 months post-intervention.

Methods: This randomized crossover study measured the effects of simulation-based training on performing chest compressions. CPR-certified students and ED personnel were enrolled between July and October 2011. All completed three 1-minute sets of chest compression-only CPR. Participants were randomized to receive instantaneous feedback (IF) during their second set of compressions or to serve as controls (C) who were not trained. First and third sets were compared to evaluate improvement. During follow-up one year later, returning participants from the "C" group were trained, while the "IF" group was not. Results: 100 subjects participated in the initial and follow-up phases. While the "C" group demonstrated no improvement, the "IF" group demonstrated significantly improved depth compliance (39% to 81%), recoil compliance (83% to 90%), and hand placement accuracy (77% to 92%) from baseline to evaluation. Follow-up a year later of the "IF" and "C" groups showed improvements in depth over the initial baseline (18.3% and 17.9%, respectively), but in no other parameters. The average change from the follow-up baseline performance was statistically the same across groups (18%). The "IF" group's follow-up baseline compression depth and hand placement were significantly worse compared to their post-training set one year previous (28% and 20%, respectively). The number of training sessions participants received outside of the study was the same (0.9) and was weakly but significantly correlated (Pearson coefficient = 0.21) with the observed improvement in compression depth in both groups. The "C" group received training during follow-up and showed significant improvement in depth (61% to 77%), placement (80% to 91%), and rate (118 to 122).

Background: The ACGME's Next Accreditation System includes the implementation of developmental milestones for each specialty. The milestones include five progressively advancing skill levels, with level one defining the skill level of a medical student graduate, and level five that of an attending physician.
Objectives: The goal of this study was to determine to what extent medical schools have prepared graduates entering as emergency medicine interns to meet the level one milestones. Methods: An electronic survey was distributed to the interns of thirteen emergency medicine residency programs, asking whether they had been taught and assessed on the level one milestones. Results: Of possible participants, 113 of 161 interns responded (70% response rate). The interns represented all four regions of the country. The rates at which EM level one milestones were taught ranged from 61% of interns for ultrasound to 98% for performance of a focused history and physical exam. A substantial number of students (up to 39%) reported no instruction on milestones such as patient disposition, pain management, and vascular access. Skills with technology, including the role of the electronic health record and computerized physician order entry, were assessed for only 39% of interns, whereas knowledge (USMLE) and history and physical were assessed in nearly all interns. Moreover, disposition, ultrasound, multitasking, and wound management were assessed less than half of the time. Conclusion: Level one milestones are the competency level expected of graduating medical students entering residency. Many entering EM interns have had neither teaching nor assessment on these knowledge, skills, and behaviors; thus there is a gap in the teaching and assessment of level one milestones for EM interns. It is unclear at this time who will be responsible (medical schools, EM clerkships, or residency programs) for ensuring that medical students entering residency have achieved level one milestones.

Stephen Leech 1, Falk E. Flach 2, Dominic Zigrossi 1, Rene Mack 1, Linda Papa 1, and Anna Liberatore 1; 1 Orlando Regional Medical Center, Orlando, FL; 2 University of Florida Shands-Gainesville, Gainesville, FL

Background: The EM Milestones project is a joint project between the ACGME and ABEM. The EM Milestones are a matrix of the knowledge, skills, abilities, attitudes, and experiences to be acquired during specialty training in EM, and they provide a basis for six-month evaluations of EM residents. Each milestone is a continuum with five levels, progressing from entry-level skill to that of an experienced practitioner. Ultrasound (US) is an essential skill, and a procedural milestone was developed specifically for Goal-Directed Focused US (PC12). Suggested evaluation methods include an observed structured competency evaluation (OSCE), direct observation, review of submitted images, a written exam, or a checklist. Objectives: We sought to evaluate the inter-rater agreement of PC12 using existing evaluation data. Methods: This is a retrospective cohort study of residents completing a US OSCE from 2009-2012. A competency-based curriculum has been in place since that time, and evaluation methods include direct observation, quality assurance review of submitted images, US scan log volumes, and OSCEs for FAST, aorta, cardiac, pelvic, and procedural US. Using existing evaluation data, two faculty assigned a milestone level (1-5) and a milestone circle (1-9) for each resident, each blinded to the other's rating. Data were analyzed using a 5x5 agreement table. Conclusion: Inter-rater agreement of PC12 to assign EM Milestone Levels 1-5 using existing evaluation data is excellent; inter-rater agreement for EM Milestone circles 1-9 is fair, although this did not affect which level was assigned (a computational sketch of such an agreement analysis follows below).
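Inter-rater agreement on an ordinal scale such as milestone levels is commonly summarized with Cohen's kappa, optionally weighted so that near-misses count less than distant disagreements. A minimal Python sketch with hypothetical ratings (not the study's data):

    # Hypothetical milestone levels (1-5) assigned by two blinded faculty
    # raters to the same residents; values are illustrative only.
    from sklearn.metrics import cohen_kappa_score

    rater1 = [1, 2, 2, 3, 3, 3, 4, 4, 5, 2, 3, 4]
    rater2 = [1, 2, 3, 3, 3, 4, 4, 4, 5, 2, 3, 3]

    # Unweighted kappa treats any disagreement equally; linear weights
    # credit near-misses on the ordinal 1-5 scale
    print("unweighted:", cohen_kappa_score(rater1, rater2))
    print("linear-weighted:", cohen_kappa_score(rater1, rater2, weights="linear"))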
Existing evaluation data can be reliably used to assign Goal-Directed Focused US EM milestone levels.

Objectives: To determine which factors (self-assessment, feedback, or both) are associated with the generation of learning goals (LG) that EM residents retain and implement after a standardized oral board examination. Methods: In this prospective educational study at four academic programs, 72 senior residents participated in a standardized oral board scenario administered by investigators, all of whom had primary teaching appointments. Following the scenario, residents completed a self-assessment form. Next, a standardized checklist was used to provide both positive and negative feedback. Subsequently, residents were asked to generate "SMART" LG (Specific, Measurable, Attainable, Realistic, and Timely). The investigators categorized the LG as stemming from the residents' self-assessments, from feedback, or from both. Within 2-4 weeks of the oral board scenario, the residents were asked to recall their LG and describe any actions taken to achieve those goals. Descriptive statistics were used to summarize the data: quantitative data were expressed as mean +/- SD, and nominal data as percentages (frequency table). Results: A total of 226 LG were initially generated (mean 3.1 +/- 1.3). At 2-4 weeks, 10 residents were lost to follow-up; the remaining 62 recalled 89 LG, of which 52 (58%) were acted upon. The sources of LG are summarized in the table (LG recalled, N = 89; LG executed, N = 52). Additionally, a total of 36 novel LG were generated at the follow-up interval, and of these, 15 (47%) were acted upon. Conclusion: After the oral board scenario, EM residents generated the majority of their LG from their own self-assessments. However, after the 2-4 week follow-up period, they recalled a greater number of LG stemming from feedback, while the largest proportion of LG acted upon were those stemming from feedback that confirmed their own self-assessments. This suggests that both self-assessment and feedback are critical factors in residents' ultimate execution of plans to improve performance.

Background: As part of the Next Accreditation System, the ACGME has endorsed the creation of specialty-specific milestones, the foundation of a new outcomes-based resident evaluation process. The milestones represent five levels of competence, from entry level (medical school graduate) to expert. Beginning July 2013, EM residents will be evaluated on 23 milestones developed by the EM Milestone Task Force. Limited validation data on the milestones exist. Residents will be expected to meet Level 4 at graduation. It is unclear whether the higher levels represent the true competence of practicing EM attending physicians. Objectives: To examine how practicing EM attending physicians self-evaluate their performance on the new EM milestones in academic and community settings. Methods: A self-evaluation survey outlining the EM Milestones was compiled and sent electronically to a sample of practicing EM attendings at four different institutions in academic and community settings. A subset of 9 of the 23 milestones was selected. Attending physicians were asked to identify which level was appropriate for them. Demographic data were collected. Data were collected using SurveyMonkey and analyzed using Microsoft Excel to calculate proportions and 95% confidence intervals (a computational sketch follows below).
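The proportion-with-CI computation described above can be reproduced with a Wilson score interval, which behaves better than the normal approximation for small samples. A minimal Python sketch with hypothetical counts (not the study's data):

    # Proportion of attendings self-rating at Level 4 or 5 on one milestone,
    # with a 95% Wilson CI; counts are hypothetical.
    from statsmodels.stats.proportion import proportion_confint

    successes, n = 54, 70  # e.g., 54 of 70 respondents at Level 4-5
    lo, hi = proportion_confint(successes, n, alpha=0.05, method="wilson")
    print(f"{successes/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")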
Results: Seventy-nine attendings were surveyed, with a response rate of 89%; 61% were academic, and 34%, 19%, and 47% had been practicing for 0-5, 6-10, and greater than 10 years, respectively. 93% graduated from EM residency programs. Out of all responses, 23% (CI = 20%-27%) were Level 1, 2, or 3; 38% (CI = 34%-42%) were Level 4; and 39% (CI = 35%-43%) were Level 5. 77% of attendings rated themselves Level 4 or 5 in 8 of 9 milestones, while only 47% rated themselves Level 4 or 5 in ultrasound (PC10) (p = 0.0001). Conclusion: While a majority of EM attendings reported meeting Level 4 milestones, a significant number felt they did not meet the Level 4 criteria. Attendings report less perceived competence in ultrasound skills than in other milestones. Further research is needed to assess whether these self-assessments reflect the true competencies of practicing EM attendings. In addition, future research should examine the correlation between achievement of milestones and board certification and subsequent quality of practice.

Objectives: The goal of this study was to evaluate the relationship between the content of verbal communication and subsequent patient knowledge of discharge instructions. Methods: We conducted a prospective cohort study of 30 English-speaking adult patients with musculoskeletal back pain or skin lacerations presenting to an academic urban ED between June and August 2012. Entire ED visits were audio-recorded, and patient knowledge of discharge instructions was assessed during a follow-up phone call within 24-72 hours of discharge. Verbal communication content during the ED visit (discussion score, DS) and patient knowledge (knowledge score, KS) were scored based on established key teaching points for each diagnosis, with a total score of 15 for back pain and 12 for laceration across four categories (diagnosis, home care, follow-up, and return-to-ED instructions). Two authors scored each case independently and discussed discrepancies before assigning a final score for each category and the corresponding total DS and KS. We used descriptive statistics and logistic regression for the analyses. Results: The final data set included 18 patients with back pain and 12 with lacerations. The mean age was 42 years, and 43% of the sample was female. The kappa statistic for coder inter-rater agreement was 0.72. Discussion (DS) and knowledge (KS) scores were consistently higher for laceration cases (median DS 10/12, IQR 3; KS 8/12, IQR 2.25) than for back pain cases (median DS 7/15, IQR 3; KS 6/15, IQR 2.5). For both diagnoses, the median difference between DS and KS was -1 (KS less than DS), and approximately half of patients (9/18 back pain, 5/12 laceration) had a KS that was the same as or higher than the DS. Age, sex, and literacy score did not predict which patients had a KS lower than the DS.

Background: Numerous studies have shown that patient satisfaction and quality of patient care improve when proper medical translation is utilized. Many physicians who have had previous exposure to a second language but no formal medical interpretation training will provide their own translation. Objectives: To evaluate ED physicians who report self-translation for fluency and competence in medical translation for limited English proficiency (LEP) patients. Methods: IRB approval was obtained for this study. All ED physicians (37 residents and 45 attendings) in an urban, tertiary care academic ED were surveyed regarding their use of self-translation with LEP patients in the ED.
Those who responded that they perform self-translation in the ED were contacted blindly by Interpreter Services and administered a telephonic exam evaluating conversational fluency and medical and health care terminology. Performance was determined by the examiner and rated "Outstanding", "Competent", "Transitional", or "Beginner". Means and confidence intervals were calculated where appropriate. Results: Seventy-seven ED physicians completed the survey, out of a possible 82.

Methods: This is a prospective observational study of patients ≥65 years presenting with falls to a tertiary care teaching facility. Patients were eligible if they were at baseline mental status (as per family or chronic care facility staff) and were not triaged to the trauma bay. At presentation, a data form was completed by treating physicians regarding the mechanism and position of the fall, history of head strike, presence of new headache, loss of consciousness (LOC), and signs of head trauma. Unknown parameters (e.g., LOC) were conservatively analyzed by presuming them to be present. Radiographic imaging was obtained at the discretion of treating physicians. Charts were subsequently reviewed to determine imaging results. All patients were called in follow-up at 30 days to determine outcomes in those who were not imaged and to assess for delayed complications. Data were analyzed with stepwise logistic regression. This study was IRB-approved.

Objectives: This is a descriptive analysis of elder fall patients who do not meet trauma alert criteria. Methods: We prospectively enrolled a convenience sample of patients presenting to a Level I community trauma center after falls who were not triaged to the trauma bay. Patients were eligible if they were ≥65 years, presented after a fall, and were at baseline mental status as per family or chronic care facility. Emergency physicians completed a data form documenting the time and position of the fall as well as patient residence. Researchers then performed a retrospective review for the results of diagnostics and patient disposition. All patients were verbally consented at evaluation for a follow-up phone call at 30 days to determine sequelae. Data were analyzed using descriptive statistics and chi-square.

Objectives: We sought to identify the incidence of c-spine fractures in elderly patients (≥65 years) presenting with falls who are not triaged to the trauma bay, and to determine historical and clinical features that predict c-spine fractures in these patients. Methods: The study site is a Level I trauma center. Emergency physicians (EPs) enrolled a convenience sample of elderly fall patients who were GCS 15 or at their baseline mental status per their family or sending facility. EPs completed data forms with the following information: place of residence, position prior to fall, history of striking the head, signs of trauma to the head, and presence of neck tenderness. Researchers then retrospectively reviewed each patient's record for the results of diagnostics and outcomes. All patients were verbally consented at the time of evaluation for a follow-up phone call at 30 days to determine sequelae. A patient was determined to have no significant c-spine fracture if he or she had a negative CT, was admitted to the hospital with no sequelae at discharge, had documented repeat ED visits with no ongoing neck complaints, or had no complaint at 30 days. Results: 787 patients were enrolled (mean age 83.6).
759 were followed up by phone, 20 were admitted and had their charts reviewed as follow-up, 3 were followed up via documented repeat ED visits, and 4 were lost to follow-up and not included in this analysis. One patient died in the ED without a neck CT and was conservatively included in the "positive" cohort. 329 patients (42%)

Objectives: The current study determined the role of TNFR1 and TNFR2 in post-MI remodeling and investigated the molecular mechanisms that may contribute to TNFα's regulatory role in post-MI injury. Methods: Adult male wild-type (WT), TNFα−/−, TNFR1−/−, and TNFR2−/− mice were subjected to MI via permanent coronary artery occlusion. Histological, biochemical, and functional analyses were performed at days 3 and 7 post-MI. Results: Compared to WT mice, MI injury was modestly reduced in TNFα−/− mice (P<0.05), markedly attenuated in TNFR1−/− mice (P<0.01), but significantly enhanced in TNFR2−/− mice (P<0.05). Plasma TNFα was significantly increased following MI in WT, TNFR1−/−, and TNFR2−/− mice (no significant difference between groups), but was undetectable in TNFα−/− mice. Adiponectin (APN), a potent cardioprotective cytokine, was significantly inhibited by MI at both the mRNA and protein levels. TNFα deletion had no significant effect upon basal APN level and partially restored APN expression/production post-MI (P<0.01 vs. WT). Basal APN levels were significantly increased in TNFR1−/− mice (P<0.05 vs. WT) and unchanged in TNFR2−/− mice. Importantly, the suppression of APN expression/production by MI was markedly attenuated by TNFR1 deletion (P<0.01 vs. WT), but further exacerbated by TNFR2 deletion (P<0.05 vs. WT). Mechanistically, TNFR1 knockout significantly inhibited, whereas TNFR2 knockout further enhanced, TNFα-induced mRNA and protein expression of ATF3, a transcription factor known to significantly inhibit APN expression. Finally, the exacerbated MI injury in TNFR2−/− mice was preferentially ameliorated by APN supplementation. Conclusion: We demonstrate for the first time that TNFα differentially regulates APN production via its two different receptors, thus contributing to the divergent effects of TNF upon myocardial ischemic injury.

Background: Major complications of reperfusion therapies are vasogenic edema and hemorrhagic transformation. The bioactive lipid sphingosine-1-phosphate (S1P) is a potent regulator of vascular permeability via S1P receptors (S1PR). We previously found that S1PR2 promotes vascular permeability in skin, lung, and retina. Objectives: The purpose of this study was to investigate the role of S1PR2 in neurovascular integrity after ischemia-reperfusion (I/R) injury. Methods: Wild-type (WT) and S1pr2−/− mice were subjected to middle cerebral artery occlusion (MCAO) to induce transient focal cerebral ischemia. JTE013 (30 mg/kg), an S1PR2-specific antagonist, was administered to WT mice by gavage after reperfusion. Brains were collected for the evaluation of brain injury, total edema, blood-brain barrier disruption (vasogenic edema), and hemorrhagic transformation. Mouse and human brain microvascular endothelial cells were used to determine tight junction protein (TJP) levels and monolayer integrity after in vitro I/R injury. Results: S1pr2−/− mice and JTE013-treated WT mice exhibited significantly lower infarct ratios (74% and 79.6% inhibition, respectively) and edema ratios (61.2% and 81.03% inhibition, respectively) compared to vehicle-treated WT mice. In addition, S1pr2−/− mice exhibited improved vascular integrity (i.e., decreased vascular permeability and less intracerebral hemorrhage) compared to wild type.
In vitro, JTE013 prevented TJP degradation and promoted brain endothelial cell monolayer integrity after I/R injury. Conclusion: S1PR2 inhibition after reperfusion reduces vascular permeability and intracerebral edema and decreases brain injury in experimental stroke. S1PR2 could be pharmacologically targeted to promote neurovascular integrity at the time of reperfusion in stroke patients.

Background: The 9-1-1 call represents an opportunity to start CPR prior to EMS arrival, but numerous caller-based and situation-specific barriers exist. Objectives: We sought to determine whether there is a difference in the performance of bystander CPR between male and female 9-1-1 callers in adjudicated out-of-hospital cardiac arrest in an urban, fire-based ALS EMS system. Methods: Inclusion criteria: out-of-hospital cardiac arrests of cardiac etiology in which the caller was also the assessor for signs of life. Exclusion criteria: arrests without 9-1-1 recordings, third-party calls, disconnected calls, cases in which EMS arrived prior to CPR instructions, and cases in which the sex of the caller was unknown. Two trained and monitored abstractors performed a structured audio review of cardiac arrests using a 74-item standardized data tool and dictionary, which were linked to outcomes from our city's Cardiac Arrest Registry to Enhance Survival (CARES) data. Our primary outcome measure was the proportion of calls in which any bystander CPR was performed. Our secondary outcome measure was the time to first chest compression for each group. We report proportions and 95% CIs as appropriate and assess continuous time data using the t-test.

Objectives: Pilot study to observe checklist implementation after the redesign of central line insertion bundles providing ED physicians with a checklist and the required equipment to insert central venous catheters (CVCs). Methods: Observational convenience sample of urgent (non-"crash") CVC insertions. Using simulated CVC insertions, observers were trained to identify checklist use and to mark specific infection prevention and patient safety elements performed as specified by the checklist. Nurses notified observers of potential cases requiring CVC insertion. Observers were required to be present prior to CVC insertion to record checklist elements related to hand hygiene, procedure sterility, and patient safety. Proportions and 95% CIs are reported.

Objectives: To evaluate the generalizability of the standardized sterile blood culture technique, we evaluated its effectiveness for reducing contamination in a community hospital ED. Methods: We conducted an interrupted time series study in a community ED in Texas with 31,000 ED visits/year. During a 10-month baseline period (Jan-Oct 2011), nurses collected cultures using usual-care, clean technique. We introduced and implemented the standardized sterile technique during a 2-month transition phase (Nov-Dec 2011). During a 10-month intervention period (Jan-Oct 2012), nurses used sterile technique, which included sterile gloves, 2% chlorhexidine skin antisepsis, a sterile fenestrated drape, and a procedural checklist. Cultures were classified as contaminated if a single culture collected in the ED grew a common skin contaminant organism. We evaluated the effectiveness of the sterile technique using segmented linear regression, comparing the monthly percentages of contaminated cultures during the baseline and intervention periods with adjustment for secular trends (a model sketch follows below; see also the figure, Abstract 509).
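A segmented (interrupted time-series) regression of this kind can be written as a linear model with a baseline trend term, an immediate level-change term at the intervention, and a post-intervention trend-change term. A minimal Python sketch using statsmodels and hypothetical monthly contamination percentages (not the study's data):

    # Interrupted time-series regression on monthly contamination
    # percentages; 10 baseline months followed by 10 intervention months.
    import numpy as np
    import statsmodels.api as sm

    pct = np.array([4.0, 4.5, 4.3, 4.8, 5.0, 4.7, 5.2, 5.1, 5.5, 5.3,   # baseline
                    2.2, 2.0, 1.8, 1.9, 1.7, 1.6, 1.8, 1.9, 3.3, 4.4])  # intervention
    month = np.arange(len(pct))                        # secular trend
    post = (month >= 10).astype(int)                   # intervention indicator
    months_post = np.where(post == 1, month - 10, 0)   # post-intervention slope

    X = sm.add_constant(np.column_stack([month, post, months_post]))
    fit = sm.OLS(pct, X).fit()
    print(fit.params)  # [intercept, baseline slope, level change, slope change]

The coefficient on the intervention indicator estimates the immediate absolute change in contamination (analogous to the 2.9% reduction reported below), while the baseline-slope coefficient captures the secular trend.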
Results: During the baseline period, 165/3417 (4.8%) cultures were contaminated, compared to 92/3164 (2.9%) during the intervention period. In the segmented linear regression model, there was a trend toward higher contamination in the baseline period, with an absolute 0.3% (95% CI: 0.04% to 0.6%) increase in contamination per month. The intervention was associated with an immediate 2.9% (95% CI: 0.7% to 5.2%) absolute reduction in monthly contamination after adjustment for the baseline trend (figure). After 8 months of the intervention and 5 consecutive months of contamination below 2%, contamination increased during the last two months of the intervention period to 3.3% and 4.4%. All personnel were retrained at that point.

Background: An estimated 100,000 patients die annually due to medical errors. Since that estimate was published, national efforts have identified key strategies to reduce medical errors, including systematic error analysis. Accordingly, our ED has implemented a formal peer review committee (PRC) process to analyze potential errors. Objectives: To assess EM physician perceptions of the value of a PRC in the analysis and understanding of medical errors. Methods: A highly structured PRC process was implemented to analyze medical errors. ED patient management concerns were initially screened by the PRC chair. If error could not be ruled out with "100% certainty", the chair solicited the involved practitioners for feedback, de-identified the records, and brought the case to the PRC for review. The PRC was open to all ED physicians and residents, but a core group attended regularly to ensure consistency. The PRC reviewed each case for six types of systems errors and six types of practitioner-based errors. Majority vote determined whether any errors had occurred. Findings of the PRC were used to guide systems error prevention strategies and department-wide educational opportunities. For the PRC to maintain eligibility for CME credit, participants had the opportunity to submit anonymous "activity assessments" after each meeting; five questions assessed perceptions of the error analysis and educational value of the PRC process. Subject responses were retrospectively analyzed, and descriptive statistics were calculated. Results: In 23 months, 221 EM physicians and residents participated in 22 PRC sessions (mean attendance: 10.0; range: 5-17; SD: 2.9). 163 surveys were completed (response rate: 82.3%). (As the CME activity director, the PRC chair was not eligible to complete the survey, so his attendance was not included in the response rate.) The survey responses are summarized in the table. Conclusion: EM physicians perceived the PRC to be a highly valuable error analysis and educational activity. Respondents reported 100% of the time that the PRC process identified errors and improved understanding of systems contributors to error. Subjects reported that the PRC process would lead to a change in their practice the majority of the time, and that their competency in the six ACGME core competencies improved most significantly in patient care, systems-based practice, and medical knowledge.

Background: It is estimated that nearly 100,000 patients die annually from medical errors. Improving our understanding of medical errors is crucial in order to prevent them. Accordingly, a formal ED peer review committee (PRC) process was implemented to analyze potential errors. Objectives: To characterize the types of medical errors occurring in the ED. Methods: A highly structured PRC process was implemented to analyze medical errors.
ED patient care concerns were initially screened by the PRC chair. If error could not be ruled out with "100% certainty", the chair solicited involved practitioners for feedback, de-identified the records, and brought the case to the PRC for review. The PRC was open to all ED physicians and residents, but a core group of attending physicians attended regularly to ensure consistency. The PRC reviewed each case for six types of systems errors (SEs) and five types of practitioner-based errors (PEs). Majority vote determined whether any errors occurred. The PRC further analyzed each case in which an error occurred to determine whether the error(s) definitively caused patient harm. Findings of the PRC analysis process were retrospectively analyzed, and descriptive statistics were calculated. Results: Over a period of 18 months (~100,000 ED visits), 207 errors were identified, including 136 SEs (65.7%) and 71 PEs (34.3%). Table 1 summarizes the errors. The most common SEs related to teamwork, and the most common PEs were classified as cognitive errors. Teamwork errors (ED or hospital) accounted for 47.8% of all errors. The PRC process identified 9 cases in which harm was determined to have definitively resulted from medical error; these cases are summarized in Table 2. Conclusion: ED SEs occur almost twice as frequently as PEs, and teamwork errors are by far the most common, accounting for nearly half of all errors identified. Of the 9 cases in which harm resulted from errors, 8 had multiple contributing errors, supporting the "Swiss cheese" model of error causation. In 7 of the 9 cases teamwork played a factor, and in only 1 case did a PE in isolation lead to harm. These results strongly suggest that while PE reduction should play some role, ED error reduction strategies should focus primarily on preventing SEs, and more specifically on improving ED and hospital teamwork.

Methods: This was a retrospective before-and-after study at an urban Level I trauma center covering the first 6 months of 2010, after the process change, and the corresponding 6 months of 2009, before it. Adults admitted from the ED to the MICU were included. We collected age, sex, race, ED, MICU, and hospital length of stay (LOS), mechanical ventilation duration, acute physiology and chronic health evaluation (APACHE) scores, and mortality from electronic medical records (the emergency department information manager, EDIM, and ICUTracker®). Differences in LOS were compared using linear models; hospital LOS, MICU LOS, and ventilator durations were log-transformed before analysis. Logistic regression was used to compare in-hospital mortality. Analyses were both adjusted and unadjusted for age and sex. T-tests were used to assess the change in APACHE scores before and after the policy change.

Objectives: To determine whether students rotating in the ED can see a high percentage of a predetermined, comprehensive set of chief complaints (CCs). We also sought to identify deficiencies in the student experience and to compare the experience of senior (SMS) vs. non-senior medical students (NSMS). We hypothesized that students would be able to encounter the majority of the predetermined CCs and that SMS would experience more of these CCs than NSMS. Methods: A CC log was designed to standardize the clinical exposure of students in the EM rotation at an academic medical center with an ACGME-accredited EM residency program. All students were asked to see and track 38 predetermined cases (figure). Log entries were confirmed by attending or fellow signatures.
Students were excluded if they did not complete the rotation or were from medical schools outside the US. SMS were defined as students who had completed internal medicine and surgery clerkships prior to their ED rotation; NSMS were students who had not completed these rotations. Descriptive statistics and non-parametric tests were used. The study took place from August 2011 to August 2012. Results: Sixty-two SMS and 40 NSMS completed logs, representing 3876 potential cases. Overall, students logged 75.6% (CI = 74.2-76.9) of the cases they were directed to see. SMS saw 76.7% of cases and NSMS saw 73.8% (p = 0.353). Eight CCs were seen by greater than 90% of students; 100% logged abdominal and chest pain. Four CCs were seen by less than 50% of students. Thirteen percent of students were able to complete the logbook in full. Conclusion: It is possible for most students to have a uniform, comprehensive clinical experience. SMS and NSMS saw a similar total number of CCs, and no CC was seen more frequently by SMS or NSMS. Our tool was able to identify concrete strengths and deficiencies in CC-specific clinical exposure from which meaningful curriculum change can be implemented.

Objectives: The authors sought to present a comprehensive review of emergency medicine clerkships as they currently exist in U.S. medical schools, including general program characteristics, the number of required clerkships, use of the national curriculum guide, didactic content, supplemental learning materials, clinical experiences, and current methods used for assessment and grading. Methods: A descriptive survey was utilized. An up-to-date database of clerkship directors representing the 128 Liaison Committee on Medical Education (LCME)-accredited U.S. medical schools was constructed using established databases (SAEM) as well as personal phone calls to the remaining schools with unknown clerkship status. All clerkship directors were asked to complete an online questionnaire, with direct telephone contact made to non-responders. Data were analyzed using descriptive statistics. Results: The response rate to the survey was 83.6%. Fifty-two percent of medical schools now require students to complete an emergency medicine clerkship. The clerkship usually lasts 4 weeks and takes place in the fourth year of medical school. The mean length of shifts is 9 hours, and the mean number of shifts per rotation is 14. Most clerkships offer didactic training, with a mean of 18 hours of lecture per rotation. Approximately 60% of respondents report that both residents and attending physicians precept students. Assessment of students comprises primarily clinical performance evaluations and end-of-rotation written tests, weighted at 66.8% and 24.5%, respectively. Sixty-five percent of respondents use criterion-based grading, 15.9% use normative grading, and 14% use a pass-fail system. Conclusion: More than half of all US medical schools now require emergency medicine clerkships during medical training. The data provide a comprehensive overview of training in emergency medicine at the undergraduate level, describing emergency medicine clerkship logistics; didactic and clinical content; and methods of assessment, feedback, and grading.

Objectives: To determine whether electronic medical student evaluations (EMSE) are more comprehensive, completed more frequently, or more precise compared to paper shift-card evaluations.
Methods: This is a before-after cohort study conducted over a 2.5-year period at an academic Level I trauma center affiliated with an EM residency program and a medical school that accepts fourth-year students rotating through the ED. We implemented an EMSE directly into the ED electronic tracking system to replace the traditional paper shift-card system. Paper evaluations from before implementation and EMSE were collected and analyzed using Microsoft Excel. A word count was performed on the free text of all evaluations. Means, standard deviations, and t-tests were calculated using SPSS 21.0. Results: There were a total of 135 paper evaluations for 30 students and 570 EMSE for 62 students. An average of 4.8 (SD 3.2) evaluations were completed per student using the paper version, compared to 9.0 (SD 3.8) per student electronically (p < 0.0001). Paper evaluations contained an average of 8.8 (SD 8.5) words of free text, compared to 22.5 (SD 28.4) words for electronic evaluations (p < 0.0001). Conclusion: EMSE significantly increased the number of evaluations completed compared to paper evaluations when integrated into the ED tracking system. The electronic evaluations provided more information about individual learners' strengths and areas for improvement, with more free-text assessment than paper shift cards.

Methods: This is a prospective, multi-institutional study in which senior medical students at four separate training sites were randomized by month to receive daily text messages containing clinical pearls in EM (intervention group, IG) or to receive identical information within the existing framework of their institution's standard curriculum (control group, CG). Texts were sent to the IG by Cel.ly, a free online group text messaging service. Pearls were selected based on the core content of EM. Students participated in 2-week or 4-week rotations (though each student in the IG received the same number of texts). At the end of the rotation, students took a standardized post-test (SAEM National EM M4 Exam, Version 2). A two-way ANOVA was used to examine the effect of the intervention while statistically controlling for the effect of 2- vs. 4-week rotations (a model sketch follows below). Results: Sixty-five students were enrolled at four clinical sites: 32 received texts (IG) and 33 did not (CG). Fifty-five students participated in a 4-week rotation and 10 in a 2-week rotation. The mean end-of-rotation score for IG students was 71.7 (95% CI 68.5-74.9), and for CG students 70.2 (95% CI 65.7-74.8). This trend toward higher test scores does not yet reach statistical significance. Conclusion: The use of social media for educational content delivery is a curricular innovation. A text-enhanced curriculum is easy to develop and administer at little cost of time and energy and adds a novel way of communicating information to students. The diverse nature of our multiple institutions demonstrates that this modality can be applied in any educational setting. Additional investigation is needed to show an effect on students' test scores.

Objectives: To evaluate student perceptions of resident and attending contributions to their EM clerkship across a spectrum of educational objectives. Methods: All fourth-year students rotating in our academic ED in 2011-12 were asked to participate in this prospective study. Students completed anonymous surveys after shifts in which all cases were supervised by both resident and attending physicians.
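As a rough illustration of the two-way ANOVA referenced in the text-message study above, the model below fits post-test score against intervention group and rotation length. The data frame is hypothetical and serves only to show the model form:

    # Two-way ANOVA: post-test score by group (IG vs CG) and rotation length.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.DataFrame({
        "score": [72, 70, 75, 68, 71, 74, 69, 66, 73, 70, 67, 72],
        "group": ["IG", "CG"] * 6,                    # intervention vs control
        "weeks": ["4", "4", "4", "4", "2", "2"] * 2,  # rotation length
    })

    # Main-effects model; the rotation-length term adjusts the group effect
    model = ols("score ~ C(group) + C(weeks)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))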
Students were asked to estimate teaching contributions from residents and attendings with regard to nine ACGME educational objectives. Responses were plotted on a 100-mm visual analog scale, where -50 represented all-resident contributions, 0 represented half-resident/half-attending contributions, and 50 represented all-attending contributions. Mean scores were compared to the null value of 0 using a one-sample t-test. Results: All 65 students participated in the study. Of 385 possible shifts, 274 surveys were collected (71% response rate). Of the nine ACGME educational objectives, students perceived that attendings provided more evidence-based teaching (5.5, 95% CI: 1.9, 9.1, p < 0.001). In contrast, students perceived that residents provided more education with regard to clinical knowledge (-4.5, 95% CI: -7.3, -1.7), chart documentation (-8.0, 95% CI: -12.0, -4.0), bedside teaching (-8.6, 95% CI: -12.0, -5.2), throughput (-13.0, 95% CI: -16.4, -9.6), interactions with other health care providers (-13.5, 95% CI: -17.7, -9.4), patient oral presentations, efficiency, and procedural teaching (-20.2, 95% CI: -24.0, -16.5), all p < 0.01. The majority (87%) reported that working with both resident and attending physicians was important to their educational experience. Conclusion: Medical students perceive most educational objectives, except evidence-based teaching, as being taught more by residents than by attending physicians in the ED. This lends quantitative support for robust "residents as teachers" curricula in EM.

Methods: This was an observational secondary analysis of a prospective cohort of patients treated by EMS providers following OHCA in Oregon from 1/1/2010 to 12/31/2010. Records for patients with signed POLST forms in the statewide database were matched to EMS records using probabilistic linkage. Our primary exposure variable was a documented DNR order in the POLST registry. We evaluated resuscitation therapy, interventions, and calls to the POLST registry in the prehospital and ED settings for concordance with the end-of-life wishes documented in POLST forms. We used descriptive statistics to analyze the sample. Results: 1,577 patients were treated by EMS for OHCA, and 82 patients matched to a previously signed POLST record. Of these patients, 50 had a documented DNR order and 32 requested that resuscitation be attempted. Of the 32 patients with "Attempt CPR" orders, 27 (84%, 95% CI 67%-95%) received prehospital resuscitation, compared to 11 (22%, 95% CI 12%-36%) of those with a DNR order. Five (10%) patients with DNR orders had resuscitation ceased prior to ED transport, and 3 (6%) had resuscitation ceased in the ED. Only 3 patients with DNR orders (6%, 95% CI 1%-17%) survived to hospital admission, compared to 12 (38%, 95% CI 21%-56%) of those requesting CPR. In 4 cases, resuscitation of patients with DNR orders was ceased prior to ED transport after contacting the POLST database. No patients with pre-existing DNR orders survived to hospital discharge. Conclusion: Patients found in OHCA with existing DNR orders had good concordance of resuscitation care with POLST orders by prehospital and ED providers. The POLST registry also allowed correct discontinuation of unwanted resuscitation. These findings suggest that emergency care providers are able to access key POLST information in a timely fashion for high-acuity patients and use this information to guide care consistent with patient wishes.

Background: Out-of-hospital cardiac arrest (OHCA) is a global concern.
Survival rates in Singapore are low (2%) compared to the USA or Europe (up to 40%). Objectives: We aimed to study the effect of various interventional strategies on survival rates for OHCA over the past 10 years in Singapore and to identify which strategies are most effective. Methods: Data were drawn from the Cardiac Arrest and Resuscitation Epidemiology (CARE) project (October 2001-2004) and the Pan-Asian Resuscitation Outcomes Study (PAROS) project (April 2010-June 2011). All events occurred in Singapore. Survival outcomes were adjusted for non-modifiable and relevant modifiable risk factors. Analysis was performed and expressed in terms of odds ratios (OR) and the corresponding 95% CIs. Differences in resuscitation efforts were expressed in terms of p values. A multivariate logistic regression model for survival to discharge was also implemented to identify strategies with significant impact. Conclusion: Women were 44% less likely to survive an OHCA than men. Some of the factors leading to the gender differences in survival (such as the bystander CPR rate and hence, potentially, the rate of shockable rhythms) may be modifiable. Thus, future study is needed to identify effective ways to reduce barriers to survival in women.

Background: ED analgesia delays may be reducible by administering pain medication via routes that are faster-acting than oral (PO) drugs but that do not require parenteral injection. One such administration route that has been used for rapid absorption is the buccal tablet. Objectives: This study's goal was to compare the fentanyl buccal tablet (FBT) with standard PO therapy (oxycodone; control, CTL) on the primary endpoint of achieving significant pain relief within 10 minutes of drug administration. Methods: Design: In this double-blind, placebo-controlled trial, 39 convenience-sampled patients meeting eligibility criteria were asked whether they wished "higher-dose" pain medication (2 tablets of 5 mg oxycodone PO, or 200 mcg FBT) or "lower-dose" (1 tablet of 5 mg oxycodone PO, or 100 mcg FBT). The study was conducted in an urban teaching ED with an annual census of approximately 55,000, from October 2012 to 2013. Analysis: The primary study endpoint was achievement of at least a 2-point reduction in numeric pain rating scale (NPRS) within 10 minutes of study drug administration. Higher- and lower-dose cases were combined in a single analysis. A skewness-kurtosis test was used to assess normality. Where data were non-normal, central tendency was assessed with medians and IQRs, and analysis was conducted with nonparametric Kruskal-Wallis testing. Central tendency for normal data was reported as means ± SD; these data were analyzed with the t-test. For categorical data, proportions were calculated with binomial exact 95% CIs. Fisher's exact test was used for intergroup comparisons (a computational sketch follows below). Univariate logistic regression was used to calculate the odds ratio (OR) for the likelihood of FBT versus CTL cases reaching the primary study endpoint of significant pain relief within 10 minutes. Results: The CTL and FBT groups were similar with respect to age, sex, ethnicity, and initial pain score. No major side effects were seen in either group, and minor side effects (mild headache, itching, wooziness) occurred with similar frequency (p = 1.00) in the CTL and FBT groups. FBT achieved significantly (p = 0.01) higher rates of NPRS reduction by the 10-minute endpoint.
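Fisher's exact test, referenced above, compares proportions in a 2x2 table without large-sample approximations, which suits a trial of this size. A minimal Python sketch with hypothetical counts (not the trial's data):

    # Fisher's exact test on achievement of a >=2-point NPRS reduction
    # within 10 minutes, FBT vs oxycodone control; counts are hypothetical.
    from scipy.stats import fisher_exact

    #              reached endpoint | did not
    table = [[14, 5],   # FBT
             [ 6, 14]]  # control (CTL)
    odds_ratio, p = fisher_exact(table)
    print(f"OR={odds_ratio:.2f}, p={p:.3f}")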
Conclusion: Preliminary results from this ongoing trial indicate promise for FBT as a safe means to provide relatively rapid and significant pain improvement without the need for injections or IV access.

Objectives: This study seeks to identify the costs incurred by patients and EDs in the care of uncomplicated dental pain in the ED. This quantification can serve as a point of reference for determining more effective approaches to dental complaints from a public health policy standpoint and for increasing the efficiency of ED utilization. Methods: A retrospective chart review was performed at the University of Kentucky ED, a tertiary referral center in Central Kentucky. The following ICD-9 codes were used to identify patients most likely to present with uncomplicated dental pain from 04/01/10 to 04/01/12: 522.0, 522.5, 522.7, 522.9, 525.9, 525.10, 521.00, and 521.81. Other ICD-9 codes were excluded in an effort to limit complicated cases and trauma. 1,801 cases from all demographic groups were matched with billed physician and hospital fees. Compensation rates were noted and compared with compensation rates for all comers to the ED over the same period. Data were also compiled by the entities responsible for payment; for example, "patient responsibility" represented self-pay/uninsured patients. Results: For the selected dental visits, a total of $249,242 was billed for physician services, of which $23,790 was collected, for a compensation rate of 9.6%. $1,600,915 was billed for hospital fees, of which $171,162 was collected, for a compensation rate of 10.7%. During the same period, the compensation rate of physician fees for all patients who presented to the ED was 38.4%, while the compensation rate of hospital fees for all ED patients was 21.5%.

There was a large and very significant increase in competency following the PGY1 rotation, followed by a small but significant decrease in competency by the beginning of the third year. Competency increased significantly, but to a smaller extent, over the course of the third year, with all residents rated as competent by the end of residency. Conclusion: US competency, as measured by an OSCE, increased significantly over time, with the largest increase occurring after an introductory PGY1 US rotation. Although there was a small drop by the beginning of the third year, competency was achieved by residency graduation with additional experience. The OSCE appears to be a valid method for evaluating resident competency in ultrasound over time.

Methods: This is a multi-center retrospective observational trial of culture-positive skin abscesses. Patients with skin and soft tissue infections were imaged with US at presentation. Patient characteristics and wound culture results were recorded in an electronic database. Ultrasound images were digitally recorded and reviewed; each set of images was blindly reviewed by an experienced sonographer to determine the presence of pre-determined image characteristics. In a subset of patients, agreement was determined using a kappa analysis, and a third sonographer performed adjudication in cases of disagreement. Multivariate analysis (stepwise logistic regression) was performed to determine the US features associated with MRSA. Results: 1023 of the 3499 patients who presented with abscesses during the study period were imaged with ultrasound. Of those, 510 underwent culture and sensitivity testing of the purulence. 196 of the 510 patients with culture results were positive for MRSA.
Abscess cavities were analyzed for shape (round vs. irregular vs. indistinct), presence of abscess cavities, amount of debris in the cavity (percent of total cavity), and size. For statistical analysis, ordinal data (i.e., size) were converted to nominal data (i.e., large vs. small) prior to analysis. A kappa analysis demonstrated good agreement for abscess size and amount of debris, and moderate agreement for presence of soft tissue fluid, shape, and visibility of the abscess edge. Stepwise logistic regression was performed to determine which variable or group of variables independently predicted MRSA in a multivariate model (see results in the table). Conclusion: There are sonographic features of abscesses that are associated with MRSA. MRSA skin abscesses are more likely to be small, indistinct, single-cavity abscesses filled with debris.

Our meta-analysis demonstrated that bedside echocardiography cannot be used as a reliable single diagnostic tool in the evaluation of patients with suspected ACS. However, it may have significant clinical use when combined with other risk stratification algorithms and other imaging techniques in the evaluation of ED patients with chest pain. A negative bedside echo in a patient with a pre-test probability of 10% yields a post-test probability of 4% or less. Overall, the last two decades of research suggest bedside echocardiography has the potential to enable more accurate risk stratification for low-risk chest pain, which would improve the triage of patients to appropriate levels of care and reduce costs.

Objectives: This study compares these US techniques and physician gestalt assessment of the level of dehydration to the gold standard, percent weight change before and after volume resuscitation, in children. Methods: A prospective observational study was conducted in an urban pediatric ED (PED) between June 2011 and November 2012. Children with signs or symptoms of possible dehydration who were to receive IV fluid resuscitation were enrolled. Patient weight, US measurements of the IVC and aorta (Ao), and clinical variables, including the gestalt of the treating senior-level clinician, were recorded before and after fluid resuscitation in the PED and on the hospital floor (if the patient was admitted). The percent weight change from presentation to discharge from the hospital (PED or otherwise) was used to calculate the degree of dehydration; a weight change of ≥5% was considered clinically significant. We calculated sensitivity and specificity and constructed ROC curves for the ultrasound techniques and physician gestalt. Results: A total of 108 patients were enrolled; 89% had mild dehydration and 11% had significant dehydration. An IVC:Ao ratio cutoff of 0.8 had a sensitivity (SN) of 67% and a specificity (SP) of 72%; inspiratory IVC collapse of more than 50% had an SN of 92% and an SP of 11%; and a physician gestalt cutoff of 5 (on a scale of 1-10) had an SN of 42% and an SP of 67% for the detection of significant dehydration in children (table; a computational sketch follows below). Conclusion: Comparing the two ultrasound techniques (IVC:Ao ratio and inspiratory IVC collapse), we found that inspiratory IVC collapse may be as sensitive a measure of dehydration in children as it is in adults. In this study, both the IVC:Ao ratio and physician gestalt were poor predictors of the actual level of dehydration.

Objectives: To determine baseline measurements of physeal plate widths and to assess variation in the measured widths between contralateral sides in healthy, uninjured children.
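The sensitivity/specificity and ROC analysis described in the dehydration study above reduces to a 2x2 classification against the weight-change gold standard plus an AUC over the continuous measure. A minimal Python sketch with hypothetical values, assuming (as an illustration, not a claim about the study's convention) that a lower IVC:Ao ratio indicates volume depletion:

    # SN/SP of an IVC:Ao cutoff for significant (>=5% weight change)
    # dehydration, plus AUC; arrays are hypothetical.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    ratio = np.array([0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 1.0, 1.1, 0.65, 0.95])
    dehydrated = np.array([1, 1, 0, 1, 0, 0, 0, 0, 1, 0])  # gold standard

    pred = (ratio <= 0.8).astype(int)   # assumed direction: low ratio = dehydrated
    tp = ((pred == 1) & (dehydrated == 1)).sum()
    fn = ((pred == 0) & (dehydrated == 1)).sum()
    tn = ((pred == 0) & (dehydrated == 0)).sum()
    fp = ((pred == 1) & (dehydrated == 0)).sum()
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))

    # AUC over the continuous ratio (negated so higher score = more likely dehydrated)
    print("AUC:", roc_auc_score(dehydrated, -ratio))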
Methods: This is a prospective observational study of a convenience sample of healthy patients between the ages of 0 and 12 years presenting to the pediatric emergency department. Focused ultrasound of the distal tibia, fibula, radius, and ulna was performed bilaterally (eight scans in total). Measurements were taken at the physeal plates in the longitudinal plane at the widest distance. The degree of variance of physeal plate widths within an individual and the average values of the physeal plates for each bone were calculated. Results: A total of 95 patients were enrolled in this study.

Results: Of 7,771 unique citations identified, 78 were selected for full-text review, resulting in 4 trials assessed for quality. Agreement between the authors' QUADAS-2 scoring was good (kappa = 0.63). Three trials were deemed to have a low risk of bias. They enrolled ED-based patients (N = 199) and evaluated clinician-performed bedside ocular ultrasound (OUS) using either 7.5 MHz or 10 MHz linear array probes. The prevalence of retinal detachment ranged from 13%-38%. The AUCs ranged from 0.943 to 1.00; the summary AUC was 0.957. Sensitivity and specificity ranged from 97%-100% and 83%-100%, respectively (figure; included studies: Shnar 2011, Yoonessi 2010). Conclusion: Bedside OUS has a high degree of accuracy in identifying retinal detachment based on three small prospective investigations. A larger prospective validation of these findings would be valuable.

Objectives: The purpose of this research project was to understand which procedures are perceived to be minimal- versus high-risk, and to investigate which procedures are perceived to need simple versus informed consent. Our hypothesis was that certain high-risk procedures would not carry the perception of needing informed consent. Methods: We surveyed emergency medicine residents and attending physicians to determine whether a procedure had significant or minimal risk. From this list of procedures, EPs were asked which type of consent should be required. The procedures listed in the survey appear in Figure 1. Results: A total of 70 physicians responded to the online survey (results in Figures 1 and 2). A one-proportion z-test was performed to compare the responses "yes, this is a high-risk procedure" vs. "yes, this procedure requires informed consent". The analysis showed a statistically significant difference between the perception of risk and the type of consent required for the following procedures: central line placement (p < 0.0001), procedural sedation (p < 0.0001), Foley catheter placement (p = 0.0043), suturing (p = 0.0461), and CT scan with contrast (p < 0.001). Conclusion: We discovered discordances between EP perceptions of risk and the necessity for informed consent. Intuitively, high-risk procedures should prompt physicians to use higher standards of consent prior to these procedures. While there may not be consensus regarding which procedures are high-risk, one would expect concordance between perceived high-risk procedures and the necessity to obtain informed rather than simple consent. Conversely, there was no discordance between minimal-risk procedures and the necessity to obtain simple consent. Based on our results, EPs are comfortable not obtaining consent for minimal-risk procedures, but demonstrate an incongruous practice with high-risk procedures.

are less likely to agree to participate in research at our institutions, despite being offered more frequent opportunities to participate.
Theory suggests that discordance between the race and sex of researcher and participant might contribute to a lack of willingness to participate, but there is little empirical evidence to support this contention. We hypothesized that when approached by someone of the same race or sex, patients would be more willing to participate than when race and sex were discordant. Objectives: To determine the influence of study personnel race and sex on agreement to participate in research.

Objectives: We hypothesized that fewer than 50% of potential research participants would read all sections of a hypothetical informed consent document and that they would rate certain sections as more important than others. Methods: Design: Prospective online survey. Participants: Healthy people recruited via Amazon's Mechanical Turk from 6/2012 to 10/2012. Interventions: Respondents were asked to imagine that they were being approached for a hypothetical ED research study, a randomized trial of severe hypertension treatments. On sequential pages, respondents could choose to read a section of an informed consent document or skip it. At any point the respondent could stop and decide whether or not he or she would participate in the hypothetical trial. We recorded respondents' choices and their ratings of the importance of each section to their decision on a 0-100 scale. Simple proportions and median ratings with IQRs were calculated. Results: 273 respondents were recruited. Demographics were 62% male and 5% African American, with a median age of 26 (IQR 22, 33) years. Eight percent of respondents read all 14 sections; the median number of sections read was 1 (IQR 0, 5). Overall, the item "What are the risks of being in this study" was rated as the most important (median 74.0, IQR 24.5, 93.0), and "How many people will take part in this study" was rated least important (median 16.5, IQR 2.3, 42.0). The item "What are the risks of being in this study" was rated as the most important by both participators (median 72.0, IQR 21.5, 93.5) and non-participators (median 81.0, IQR 35.0, 90.5) in the hypothetical trial. Conclusion: Potential research participants do not read all the items in an informed consent document, and certain sections of a typical informed consent document are more important to their decisions; in particular, the risks of the study were rated the most important. Further work should determine better ways to convey this information to actual ED patients.

Objectives: The goal of this study was to determine the ability of EM faculty members to accurately predict resident EMITE scores (and thereby assess deficiencies in medical knowledge, MK) before results return. Methods: In this IRB-exempted study, the authors asked EM faculty at the study site to predict the 2012 EMITE scores of the 50 EM residents two weeks before results were available. Predictions were entered and stored anonymously in an online survey program. The primary outcome was prediction accuracy, defined as the proportion of predictions within 6% of the actual score; 6% was selected because it was the average standard deviation of scores on the 2011 EMITE. The secondary outcome was prediction precision, defined as the mean deviation of predictions from the actual scores (a computational sketch of both outcomes follows below). The authors assessed specific faculty background variables, including years of experience, educational leadership status, and clinical hours worked, for correlation with the two outcomes.
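The accuracy and precision outcomes defined above are straightforward to compute from paired predicted and actual scores. A minimal Python sketch with hypothetical pairs (not the study's data):

    # Faculty prediction accuracy (within +/-6 points of the actual score)
    # and precision (mean absolute deviation), per the study's definitions;
    # the predicted/actual pairs below are hypothetical.
    import numpy as np

    predicted = np.array([80, 75, 85, 90, 70, 82, 78, 88])
    actual    = np.array([83, 81, 84, 82, 75, 80, 85, 86])

    dev = np.abs(predicted - actual)
    accuracy = (dev <= 6).mean()   # proportion of predictions within 6%
    precision = dev.mean()         # mean deviation from the actual scores
    print(f"accuracy={accuracy:.0%}, mean deviation={precision:.1f} points")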
Results: Thirty-two of the 38 faculty (84.2%, 95% CI 69.6 to 92.6) participated in the study, rendering a total of 1,600 predictions for 50 residents. Mean resident EMITE score was 81.1% (95% CI 79.5 to 82.8). Mean prediction accuracy for all faculty participants was 69% (95% CI 65.9 to 72.1). Mean prediction precision was 5.2% (95% CI 4.9 to 5.5). Educational leadership status was the only participant background variable correlated with the primary and secondary outcomes (Spearman's ρ = 0.51 and -0.53, respectively). Four participants achieved very high (>80%) prediction accuracy and three participants achieved very tight (<4%) prediction precision.

are mandated to take COMLEX, feel they must also incur the cost of taking the USMLE in order to be fairly evaluated by allopathic program directors, who feel that COMLEX may not predict performance in an allopathic program. Objectives: The purpose of this study is to determine the predictive value of COMLEX scores in assessing performance on EM in-training (IT) exams (ABEM and ACOEP). Our hypothesis is that there will be a direct positive relationship between performance on COMLEX and performance on IT exams. Methods: This observational, retrospective chart review involved collecting data from a convenience sample of 86 osteopathic residents who graduated from the 4-year dually certified EM residency program at Einstein Medical Center, Philadelphia. All residents took COMLEX 1 and 2, and all but three took at least seven IT exams during residency. Bivariate regressions at each post-graduate (PG) year with IT scores as outcomes and each COMLEX score as covariate were analyzed. In addition, single multivariable regressions with the IT scores as outcomes and COMLEX 1 and 2 scores together as variables were performed at each PG year. Results: There was a significant linear correlation between performance on COMLEX 1 (p<0.01 for all exams) and COMLEX 2 (p<0.0005 for all exams) and ABEM and ACOEP IT exams. The correlation for both exams was stronger with COMLEX 2 (ABEM adjusted R² = 50% PGY1 to 34% PGY4, ACOEP 32% to 13%; figure) than with COMLEX 1 (ABEM adjusted R² = 24% PGY1 to 7% PGY4, ACOEP 9% to 2%). Conclusion: There is a significant correlation between performance on COMLEX 1 and 2 and performance on both ABEM and ACOEP EM IT exams. The correlation was strongest between COMLEX 2 and the ABEM exam. As would be expected, given that COMLEX 2 is a more clinically oriented exam, COMLEX 2 is a better predictor of performance than COMLEX 1. These data suggest that performance on COMLEX 2 could be used by residency directors in both allopathic and osteopathic programs to predict resident performance.

Objectives: We sought to measure the discriminatory ability of PVI to predict FR as measured by Swan-Ganz catheter in cardiac surgery patients undergoing passive leg raise. Our hypothesis was that PVI would correlate well with FR in intubated patients but not spontaneously breathing patients. Methods: Prospective observational study of postoperative cardiac surgery patients with pulmonary artery catheters in the cardiothoracic ICU of a major academic medical center. A fingertip pulse oximetry sensor was applied to measure PVI (Masimo, Irvine, CA). Vital signs, PVI, and cardiac index were collected before, during, and after passive leg raise (PLR). FR was defined as an increase in cardiac index of greater than 10% at any point during PLR. Using prior estimates of expected correlation, α = 0.05, and β = 0.8, we estimated a sample size of 23 per group.
We calculated AUC using the Wilcoxon method to assess the discriminatory ability of PVI.

Objectives: We assessed the association of specific findings on the initial ED ECG with subsequent 30-day cardiac events after syncope. We hypothesized that the magnitude of association would vary by specific ECG findings. Methods: This is a retrospective study of older adults (age ≥60 years) who presented to one of three EDs with syncope or near-syncope. All were members of a regional health system, which allowed for complete ascertainment of subsequent death or health service use. To eliminate diagnostic test bias, we excluded patients with a serious underlying cause of syncope that was identified during the ED visit (e.g. ED ECG diagnostic for arrhythmia). Outcomes were combined 30-day cardiac events, including cardiac death, arrhythmias, myocardial infarction, new diagnosis of obstructive structural heart disease, or major invasive cardiac procedure. Cardiac deaths were identified through state death files. Non-fatal outcomes were identified through chart review by physician investigators. All patients had ECGs that were over-read by cardiologists. Research staff noted the presence or absence of eight ECG findings in the cardiology over-read. Events were modeled using multivariate logistic regression. Results: The study cohort included 2,871 patients accounting for 120 cardiac events. Agreement between administrative data and physician review for cardiac death was good (K=0.7). Inter-rater reliability of chart review for non-fatal outcomes and ECG findings was high (K=0.9). Adjusted odds ratios (95% CI) were: non-sinus rhythm, 3.6 (2.3-5.5); left bundle branch block, 2.6 (1.1-6.0); abnormal conduction intervals, 1.

Background: Non-specific chest pain (CP) is the fourth most common reason for ED visits in the US, with aneurysmal aortic disease representing 13,000 deaths annually. Thoracic aneurysmal disease is associated with aortic dissection, a time-sensitive diagnosis with a 1% increase in mortality for every hour it remains undiagnosed. As focused cardiac ultrasound (FOCUS) becomes a more common point-of-care (POC) test for evaluation of CP, it may be a useful modality in assessing patients for aortic dilation and help identify those who need additional diagnostic imaging or operative management. Previous retrospective analysis by emergency physicians (EPs) with significant ultrasound experience showed FOCUS to be accurate in assessing aortic measurements. Objectives: To prospectively compare POC FOCUS to CT angiography (CTA) in the measurement of ascending aortic dimensions by providers with various skill levels, and to determine the diagnostic accuracy of FOCUS for the detection of thoracic aortic dilation and aneurysm with CTA as the reference standard. Methods: A prospective cohort presenting to an urban, academic ED who had both FOCUS and CTA for suspicion of thoracic aorta pathology were enrolled in a convenience sample. Proximal thoracic aorta dimensions were measured by EPs of various skill levels in ultrasound. CTA measurements were obtained by a radiologist blinded to the FOCUS results. Bland-Altman plots with 95% limits of agreement were used to demonstrate agreement for aortic measurements, kappa statistics were used to assess agreement between modalities, and intraclass correlation was employed for inter- and intra-observer variability (a sketch of the Bland-Altman computation follows below). Using a cutoff of 40 mm, we additionally calculated the sensitivity and specificity of FOCUS for aortic dilation. Results: To date, 32 patients have been enrolled.
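A minimal sketch of the Bland-Altman computation named in the methods above: the 95% limits of agreement are the mean of the paired differences plus or minus 1.96 times their standard deviation. The measurements here are hypothetical, not study data.

    import numpy as np

    focus_mm = np.array([36.0, 41.0, 33.0, 45.0, 38.0])  # hypothetical FOCUS aortic diameters
    cta_mm   = np.array([35.0, 43.0, 31.0, 44.0, 40.0])  # hypothetical CTA diameters

    diff = focus_mm - cta_mm
    bias = diff.mean()               # mean difference (systematic bias)
    sd = diff.std(ddof=1)            # SD of the paired differences
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
    print(f"bias {bias:.1f} mm, 95% limits of agreement ({lower:.1f}, {upper:.1f})")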
Mean (±SD) age was 52 (±18) years and 31% were male. The mean difference (95% limits of agreement) for the Bland-Altman plots was 0.6 mm (-6.3 to 7.5). Sensitivity and specificity (95% confidence interval [CI]) between FOCUS and CTA for the presence of aortic dilation at the 40 mm cutoff were 0.50 (95% CI = 0.02 to 0.97) and 0.93 (95% CI = 0.77 to 0.99), respectively. The Pearson product-moment correlation was 0.63. Intraclass correlation (ICC) was 0.69 and the ICC for intra-observer variability was 0.93. Conclusion: In this prospective study, FOCUS demonstrated good agreement with CTA measurements of maximal thoracic aortic diameter.

Background: The ED is a common destination for patients with acute lower extremity symptoms concerning for deep venous thrombosis (DVT). Evaluation is challenging, due to inadequacy of the clinical exam to identify clot, lack of 24-hour duplex Doppler ultrasonography, and lack of familiarity with the most common pretest probability assessment tool for DVT, the Wells score. Optimal identification of low-probability patients could expand use of D-dimer testing, yielding improved efficiency. Objectives: We report use and precision of pretest probability for components of the Wells DVT score as well as empiric clinician judgment. Methods: Prospective, observational, single-center ED study including all patients evaluated for DVT. Exclusions: known DVT patients on warfarin with INR >2.0 or other non-aspirin anticoagulation, DVT diagnosed within the last 1 month regardless of INR, or known results of duplex Doppler. Two clinicians (attending, resident, or mid-level) independently evaluated each patient prior to test results and completed a web-based data collection instrument including components of the Wells score and overall clinician empiric estimation of DVT probability (<10%, 10-19%, ≥20%). Structured medical record review determined outcome. The kappa statistic was calculated to assess inter-observer agreement. Results: Ninety patients were enrolled; mean age was 53 years, 67% female. Acute DVT was diagnosed in 13% (95% CI 7-22%), and acute PE in 3% (0.7-9%). Kappa values that were good (>0.60) were: active cancer (0.88), leg immobility (0.70), previous DVT (0.93). Kappa values that were poor (<0.20) included pitting edema and superficial collateral veins. All other kappa values were fair to moderate. Kappa for binary empiric unstructured gestalt (<10% vs. ≥10%) was 0.35 with 67% agreement. Prevalence of acute DVT/PE in the empiric <10% group was 3/39 (7.7%). Acute DVT/PE in the Wells 2 group was 9/71 (12.5%) (95% CI for difference: -6% to 16%). Conclusion: This preliminary work suggests the highest precision exists for the historical components of the Wells DVT score rather than the physical exam components. Additional work with an increased sample size is needed to compare accuracy of the Wells DVT score in aggregate vs. empiric gestalt estimation. However, the clinical utility of both may be limited by low inter-observer agreement.

Outpatient have examined ED referral rates for outpatient follow-up, but none have ascertained patient follow-up rates, ongoing hypertension, or outpatient management. If a substantial number of patients with no prior history remain hypertensive and require pharmaceutical intervention, this strongly supports ED referral efforts for the prevention of hypertension-associated morbidity.
Objectives: To describe the characteristics of ED patients with elevated BP and to determine the rates of ED referrals for BP management, outpatient follow-up, and initiation of antihypertensive therapy in the outpatient setting. Methods: We conducted this prospective, cohort study in an academic, inner-city ED in 2012. Patients were eligible if ≥18 years, were not pregnant, had an elevated BP (SBP ≥140 or DBP ≥90 mm Hg) at triage and on reassessment, and were discharged. Research associates performed BP reassessments immediately after the initial physician contact, prior to any medication administration, and also collected demographic, socioeconomic, and primary care physician (PCP) access data. All participants were contacted at 30 days to determine if they had made follow-up appointments or seen their PCPs. Student's t-test and chi-square testing were used for the analyses. Results: Of 6,031 patients who were screened, 439 were eligible and 230 were discharged from the ED and included in the analysis. Mean age was 49.5 (15.3) years, 122 (53%) were female, 118 (51%) were black, and 41 (18%) were Hispanic. There were 71 (31%) patients with no prior history of hypertension and 159 (69%) with known, but uncontrolled, hypertension. Patients with no prior history of hypertension had a lower mean SBP (157 vs 165 mm Hg, p=0.001) and were less likely to have PCPs compared to those with known hypertension (72% vs 86%, p=0.01). Patients did not differ with respect to age, sex, DBP, or insurance status. Discharge instructions, 30-day follow-up, and outpatient management are presented in the table. Conclusion: ED-based referrals for outpatient BP management remain low; however, in patients who do follow up with their PCPs, over one third of those with no prior history are found to be hypertensive and over half of patients with a history of hypertension remain poorly controlled. Our findings strongly support robust ED referral efforts for outpatient BP management in this cohort.

Characterizing Intimate Partner Violence Victims Unwilling or Unable to Participate in a Follow-up Intervention. Justin Schrager, Sheryl Heron, Shakiyla Smith, and Debra Houry; Emory University School of Medicine, Atlanta, GA

Background: Health screening kiosks have been shown to anonymously and safely detect intimate partner violence (IPV) victimization among women in the emergency department (ED). However, the population of women who screen positive for IPV but decline to participate in safety interventions has not been fully described. Objectives: To describe the personal and behavioral characteristics of women who screen positive for IPV on an ED-based computer health kiosk but were unwilling to participate in a follow-up program. We hypothesized that women of higher SES and white race would be more likely to forego enrollment in an intervention program. Methods: All non-critically ill, adult, English-speaking women triaged to the waiting room of three EDs (large urban, medium community, and small academic) during study hours (weekdays, 11 AM-7 PM, 6/2008-12/2009) were eligible. Women were approached by study coordinators to participate in the general health screening survey at a computer kiosk. The validated Universal Violence Prevention Screening Protocol was used to screen for IPV. Women who screened positive for IPV were given a pamphlet of resources and asked to enroll in a follow-up study for a 3-month period. Covariates of interest included income, race, and several validated behavioral health questionnaires.
Chi-square, independent t-tests, and logistic regression models were used to statistically compare participants. Results: 1,474 women were screened for IPV, and 263 screened positive (17.8%). 154 (58.6%) of those who screened positive enrolled in the follow-up intervention. Women willing to participate in follow-up were significantly (p<0.05) more likely not to be employed, to have a low household income, to be enrolled at the urban safety net hospital, and to abuse drugs. Enrollment was not associated with age, race, marital status, education, perceived level of health, having a regular doctor, depressive symptomatology, or tobacco or alcohol use. Conclusion: ED-based computer kiosk health screening with the aim of enrolling patients in an IPV intervention captures women with low household income who share some of the most vulnerable socio-behavioral characteristics; however, it was not as successful in enrolling women with higher income and lower socio-behavioral risk factors.

Objectives: We hypothesized that a geographic information systems (GIS)-based technique could identify less violent areas and characterize the resilience factors associated with their relative security. Methods: Public data from the Boston Police Department were compiled into a list of police reports for violent incidents (robbery, assault and battery, and murder) that occurred in the city of Boston between August 1, 2009 and July 31, 2010. The address for each incident was then geocoded (using Tiger Street Maps and ArcGIS 9.3) to latitude/longitude coordinates and then tabulated by census block group. For each census block group, we also tabulated a variety of sociodemographic factors (immigration, income, education, and others from the US Census) and elements of the built environment (greenspace, commercial zones, public transit, and liquor stores from a variety of sources). Univariate and multivariate Poisson analyses were performed examining the association of the various resilience factors with rates of violence (Stata 7.0). Results: 7,122 violent incidents were reported in the time period surveyed. Census block groups averaged 269.2 incidents per square mile per year (with a range of zero to 1,843). Univariate analyses found strong associations between lower violence and higher income, education, employment, owner occupancy, and less public transit. Two multivariate analyses were performed. First, a base model was generated using median income and population density to predict the rate of violent incidents. A broader multivariate analysis performed much better and found strong associations of violence/security with family structure, non-vacant housing, education, and access to public transit. Conclusion: Violence in the urban environment is spatially distributed in a non-random pattern. Geographic analysis can identify resilience factors that are associated with lower levels of violence.

Objectives: To determine the prevalence of and risk factors for AAD and CDAD in patients discharged from the ED after receiving or being prescribed antibiotics. Methods: We enrolled adults being discharged from an urban academic ED after receiving and/or being prescribed antibiotics over a six-month period in 2012. Data were obtained in the ED and by telephone 28 days after antibiotic completion. Diarrhea was defined as any loose stools. AAD was defined as >3 loose stools per day for >2 days. CDAD was defined by a positive Clostridium difficile culture.
The frequency of diarrhea and AAD is reported as means, and relative risks were determined by Fisher's exact test. than patients not given IV antibiotics. Patients who received multiple antibiotics were also more likely to get AAD (26.1%, 95% CI 6.7, 45.5) versus single antibiotic therapy (7.0%, 95% CI 0.0, 14.9; p=0.031). Clindamycin had a higher incidence of diarrhea (31.8%, 95% CI 10.7, 52.9) and AAD (27.3%, 95% CI 7.1, 47.5) compared to all other oral antibiotics combined (9.5%, 95% CI 0.1, 18.8, p=0.025 and 7.1%, 95% CI 0.0, 15.3, p=0.028). IV antibiotic administration carried a relative risk of AAD of 5.32 (95% CI 1.2, 23.6; p=0.023), while multiple antibiotics carried a relative risk of AAD of 3.73 (95% CI 1.0, 21.4; p=0.05). One case of CDAD was observed in a patient who received both IV and multiple antibiotics. Conclusion: Patients who received IV antibiotics had much higher rates of diarrhea and AAD. Patients who received multiple antibiotics or clindamycin also had higher rates of AAD. With the rate of CDAD being as high as 10-20% among patients with AAD, caution should be exercised when prescribing IV antibiotics, multiple antibiotics, or clindamycin.

2011 by an urban academic center in a region of low HIV prevalence. Individuals at high risk or with known HIV were recruited by a targeted HIV testing program in the ED and its affiliated infectious diseases center. These index cases provided access to social networks via compensated, coupon-based, peer-referral for HIV testing. Social contacts recruited by indexes could participate as next-generation index cases if they were also high-risk or HIV-positive. We reviewed hospital records to determine whether people tested by the peer-referral program also had study-site ED visits or HIV tests within the previous five years. Results: The social network program tested 466 patients, with 4 (0.9%, 95% CI 0.3%-2.3%) diagnosed as positive. Participants were 80% African-American, mean age was 42 years (SD 13), and 58% were uninsured. Reported risks included high-risk heterosexual activity (35%), exchange of sex for money or drugs (24%), IV drug use (14%), men who have sex with men (11%), and sex partner with HIV (6%). Overall, 158/466 (34%, 95% CI 29%-40%) had no prior visits to the ED. Of those with prior ED visits, 213/308 (69%, 95% CI 60%-79%) had never been tested by the ED HIV testing program. Conclusion: Social network testing may provide a valuable adjunct to ED screening by increasing access to high-risk, uninsured populations beyond those usually encountered in the ED setting. One-third tested by this social network program were previously unknown to the ED, and two-thirds had not been tested by the ED's testing program.

Methods: This exploratory qualitative study was conducted from 7/1/11 to 6/30/12 in a Level I trauma center. ED patients aged ≥18 with penetrating trauma and enrolled in VIAP were eligible for the study. A random list of eligible clients was generated. A trained non-VIAP qualitative interviewer obtained consent and conducted 20 in-depth, semi-structured interviews based on feasibility. Interviews were audiotaped, transcribed, de-identified, coded, and analyzed using NVivo 10. Thematic content analysis consistent with grounded theory was used to identify themes related to client experience and perception of VIAP. VIAP's effect was assessed through changes in attitude and life circumstances pre/post-injury. Inter-rater agreement was calculated to assess consistency of coding. Results: Twenty subjects were interviewed.
Most were male (14/20, 70%) and African American (15/20, 75%), reflecting the overall VIAP clientele. Coder agreement was excellent (kappa >0.90). Most subjects (17/20, 85%) perceived their advocate as a caring adult in their lives, and 13/20 (65%) cited some aspect of the peer-support model that helped establish a trusting relationship with their advocate. Positive attitude shift, improved confidence, and desire to follow and accomplish goals post-injury were major themes that demonstrated VIAP's effects (17/20, 85%). Every subject noted important services provided by VIAP advocates (e.g. counseling, assistance in housing, jobs, educational resources).

having visited the ED on a weekend as compared to a weekday (OR 0.96, 95% CI 0.93-0.99). The three most common ED discharge diagnoses were renal disease (OR 3.4, 95% CI 2.5-4.5), CHF (OR 3.0, 95% CI 2.6-3.5), and non-infectious lung disease (OR 2.9, 95% CI 2.3-3.7). There were no hospital characteristics associated with bounce-back admissions. Conclusion: We found 4.6% of discharged older Medicare patients from California EDs to have bounce-back admissions within 7 days. We found patients with possibly a greater disease burden, such as the oldest old and residents of skilled nursing facilities, to be especially at risk for a bounce-back admission. We also found that incomplete evaluations resulting in leaving against medical advice were associated with bounce-back. Our findings suggest that quality improvement efforts focus on these high-risk individuals.

and visit-level measures of ED crowding. The outcome was unscheduled hospitalizations within 7 days of ED discharge. System-level metrics included exposures to ED occupancy, boarding time, and external length-of-stay (LOS). Visit metrics included waiting and evaluation time, as well as total ED LOS. Covariates included demographic characteristics, comorbidities, Emergency Severity Index level, vital signs, ED discharge diagnosis, time variables, and ED site. For each crowding measure we fit linear and non-linear multivariable logistic regression models using quadratic and cubic terms. Results: The study cohort contained a total of 625,096 ED visits among 625,096 patients. There were 16,957 (2.7%) patients with 7-day bounce-back admissions. Compared to a median evaluation time of 2.2 hrs, an evaluation time of 10.8 hrs was associated with a relative risk of 3.9 (95% CI 3.7-4.1). Compared with a median ED LOS of 2.8 hrs, an ED LOS of 11.6 hrs was associated with a relative risk of 3.5 (95% CI 3.3-3.7). None of the other ED measures were associated with the outcome. Conclusion: Evaluation time and ED LOS are associated with increased 7-day bounce-back admissions. Our findings suggest that the Medicare measure of ED LOS in discharged patients is confounded by illness severity and is an unreliable measure of ED quality.

Objectives: To evaluate whether the use of TB for ED patients with LHL affects patient satisfaction. Methods: We performed a prospective randomized controlled study of patients discharged from an urban, academic ED between 6/1/12 and 8/15/12. All English-speaking patients ≥18 years old with a score of ≤6 on the Revised Rapid Estimate of Adult Literacy in Medicine (REALM-R) were eligible. Exclusion criteria included aphasia, psychiatric chief complaint, clinical intoxication, known dementia, mental handicap, or insurmountable communication barrier. Patients were randomized into TB vs. standard discharge instructions groups.
At discharge, we conducted a structured interview including four items derived from the AHRQ's validated Consumer Assessment of Healthcare Providers and Systems (CAHPS) questionnaire. Questions included: 1) whether the medical team explained things in a way that was easy to understand, 2) whether the medical team spent enough time with the patient, 3) satisfaction with the quality of the discharge instructions provided, and 4) whether the patient would recommend this ED to friends and family. All questions had three-level responses, which were dichotomized for two questions due to low cell numbers. Interviewers were blinded to patient responses made on a tablet computer. Differences were analyzed between groups using the chi-square test. Results: Among 129 and 122 patients in the control and TB groups respectively, no differences were observed in age, race, or education level achieved. No differences were observed between groups in satisfaction with 1) understandable explanations (p=0.568), 2) time with the patient (p=0.412), 3) quality of the discharge instructions provided (p=0.682), and 4) whether the patient would recommend this ED to friends and family (p=0.999). Conclusion: In this single-center study, TB did not result in improvement in patient satisfaction.

Evaluation of Patient Understanding of ED Discharge Instructions: Use of a Scale to Assess Self-Efficacy to Carry Out Discharge Instructions. Luke A. Stevens, Breanne K. Langlois, James A. Feldman, Patricia M. Mitchell, and Thea L. James

Objectives: To determine whether providers assess patients' ability to follow ED discharge (D/C) instructions and to determine patients' confidence, ability, and willingness to care for their medical problems and follow D/C instructions by testing a self-efficacy (SE) tool. Methods: This was a cross-sectional, prospective survey of ED patients post-D/C in an urban, academic, Level I trauma center. Research assistants obtained consent and conducted structured interviews from a convenience sample. We included English-speaking adults aged ≥21 discharged from the ED from 6/12 to 11/12. The SE tool measured subjects' confidence and ability to manage their medical problems (scale 0-100) and likelihood of following D/C instructions (5-point Likert scale). All interviews were audio-taped, transcribed, coded, and analyzed using NVivo 10. Descriptive statistics were generated using SAS 9.1. Results were dichotomized to show complete confidence, ability (SE score = 100 vs. <100), and likelihood of following D/C instructions (score = 5 vs. <5). Themes were pre-determined, based on the scales. For SE responses <100, subjects were asked to explain why. Main reasons were identified by grounded theory. Results: We enrolled 118 subjects with a mean age of 44; 67/118 (57%) were male, 67 (57%) black, 20 (17%) white, and 13 (11%) Hispanic. Most (115, 97%) felt they understood their D/C instructions fully. Providers in the ED did not ask about ability to follow D/C instructions in 67% of patients, yet patients reported they were willing (75%) and able (78%) to adhere. Despite this, almost half (46%) were uncertain they could care for their medical problems. The main reasons were: physical limitations (31%), unpredictable circumstances (13%), and habits and addictions (15%). Conclusion: Few patients were asked by providers at the time of D/C if they could follow their D/C instructions. Many patients indicated uncertainty that they could adhere to D/C instructions. The use of an SE tool can identify patient-level barriers that providers can address at the time of D/C.
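Several of the randomized comparisons above analyze dichotomized responses between two groups with a chi-square test. A minimal sketch of that comparison; the 2x2 counts are hypothetical, not study data.

    from scipy.stats import chi2_contingency

    # Rows = study arm (control, teach-back); columns = (satisfied, not satisfied).
    table = [[95, 34],   # hypothetical control-arm counts
             [93, 29]]   # hypothetical teach-back-arm counts

    # chi2_contingency applies Yates' continuity correction for 2x2 tables by default.
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")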
knowledge deficits following emergency department (ED) discharge are most significant for topics related to home care and return-to-ED instructions, as compared to diagnosis and follow-up instructions. Objectives: The goal of this study was to determine the content of verbal communication of discharge information during an ED visit. Methods: We conducted a prospective cohort study of 30 English-speaking adult patients with musculoskeletal back pain (BP) or skin lacerations (SL) presenting to an academic urban ED from June to August 2012. Entire ED visits were recorded with a digital audio recorder. The content of verbal communication was scored based on established key teaching points for each diagnosis, with a total score of 15 for BP and 12 for SL across four categories (diagnosis, home care, follow-up, and return-to-ED instructions). Two authors scored each case independently and discussed discrepancies before providing a final score for each category and a total discussion score (DS). Verbal content for each category was considered adequate if 50% or more of the teaching points were discussed; otherwise it was considered deficient. We used descriptive statistics for the analyses. Results: The final data set included 18 patients with BP and 12 with SL. The mean age was 42 years and 43% of the sample was female. Discussion scores were consistently higher for SL cases (median DS 10/12, IQR 3) than BP cases (median DS 7/15, IQR 3). All four categories were discussed in 83% of SL cases (10/12), with two cases failing to address home care and/or return-to-ED instructions. All four categories were discussed in 33% of BP cases (6/18). Verbal communication of return-to-ED instructions was absent in 39% of BP cases (7/18) and deficient in 78% (14/18). Discussion of home care and follow-up instructions was absent in 22% of BP cases (4/18) and deficient in 56% (10/18) for home care and 44% (8/18) for follow-up. Conclusion: This study demonstrates a greater quantity of discharge information for skin lacerations (a procedure-oriented diagnosis) than for back pain. Deficiencies in verbal communication mirror previously identified patterns for patient knowledge deficits, with the greatest deficits found for return-to-ED instructions, followed by home care and follow-up instructions.

Background: Few interventions are known to modify the negative effects of limited health literacy (LHL) on patient outcomes. The teach-back (TB) method, where patients are prompted to state back in their own words their comprehension of information, has promise to improve communication, and is recommended by the AHRQ as a "universal precaution," but has not been tested in the emergency department (ED). Objectives: To measure patient comprehension using TB compared to standard discharge instructions. Methods: We performed a prospective, randomized controlled trial of English-speaking patients >18 years old in an urban academic ED between 6/1/12 and 8/15/12. Eligible patients had LHL as determined by scores <6 on the REALM-R (Rapid Estimate of Adult Literacy in Medicine-Revised). Exclusion criteria included altered mental status, known dementia, aphasia, or insurmountable communication barrier, clinical intoxication, sexual assault, or psychiatric chief complaint. Nurses received standardized training in the TB method. Patients were randomized to standard or TB instructions, which were audio-recorded and followed by a structured interview assessing comprehension in: 1) diagnosis, 2) ED course, 3) medications, 4) follow-up, and 5) return instructions.
Concordance between audio-recorded responses and review of the medical record was ranked on a five-level scale using methodology by Engel et al. We analyzed differences between groups using the chi-square test. Inter-rater reliability for concordance estimates was evaluated with the kappa statistic.

Objectives: We hypothesized that receiving prior treatment for pain would not independently increase the participation rate for patients approached for cardiac biomarker ED research studies. Methods: DESIGN: Retrospective observational cohort study. SETTING: Academic urban tertiary care ED with an annual census of approximately 70,000 visits. PARTICIPANTS: Patients who were approached for enrollment into one of two IRB-approved, cardiac biomarker research studies between 12/2010 and 11/2011. To be eligible, patients had to be ≥18 years of age, present with a chief complaint of chest pain, and have experienced chest pain within 12 hours of presentation. Trained clinical research coordinators approached eligible patients from 8 am-10 pm on weekdays with intermittent sampling on weekends. OBSERVATIONS: Pain treatment data were abstracted from electronic medical records by a single reviewer who was blinded to the study hypothesis. Patient demographics and participation outcomes were recorded from a research screening log. DATA ANALYSIS: Simple descriptive statistics were calculated and a multivariate logistic regression model was created with participation as outcome, pain treatment as a binary predictor variable, and age, race, sex, and self-reported pain score as confounding variables.

and clinical response to a fixed initial dose (1 mg) of IV hydromorphone. We hypothesized that the response to a fixed dose of opioid would decrease as BMI increases, if an association exists. Methods: A prospective interventional study was performed. A convenience sample of adult ED patients with acute pain deemed to require IV opioids by the attending physician received 1 mg of IV hydromorphone. Pain scores (numerical rating scale, NRS) were recorded at baseline, 15 min, and 30 min after medication administration. Pain relief, satisfaction with analgesia, desire for more analgesics, and side effects (nausea, vomiting, rash, SatO2 <92%, SBP <90 mmHg), as well as patients' characteristics (age, sex, race/ethnicity, educational level, pain duration, location, and mechanism (injury vs. non-injury)) were documented. Weight and height were measured. We used ANOVA to compare mean NRS scores and multivariable linear regression to control for confounding variables. Results: 1,438 patients were screened and 163 patients were enrolled in the study. The mean age was 39 years, 62% were female, 57.1% Hispanic, and 24.5% African American. Obese patients (BMI ≥30) were significantly older and more were female compared to normal-weight (BMI <25) and overweight patients (BMI ≥25 to <30). There was no association between BMI and pain reduction at 15 min and 30 min after hydromorphone administration, as shown in the table. Similarly, pain relief, satisfaction, desire for more analgesics, and frequency of side effects were not associated with BMI. There was no association between NRS reduction and BMI after adjusting for age, sex, race/ethnicity, pain duration, and mechanism. Conclusion: These results do not support adjustment of the initial IV hydromorphone dose based on BMI.

Background: Adequate treatment of acute pain in many ED patients remains an elusive goal.
One potential strategy is to administer a large bolus of opioid, while another is to administer a rapid, two-step opioid titration protocol. Objectives: To compare a high initial dose of 2 mg IV hydromorphone against titration of 1 mg IV hydromorphone followed by an optional second dose. Methods: Patients aged 21-64 years with severe pain were randomly allocated to 2 mg IV hydromorphone in a single bolus or the "1+1" hydromorphone titration protocol. "1+1" patients received 1 mg IV hydromorphone followed by a second 1 mg dose 15 minutes later if they answered "yes" when asked, "Do you want more pain medication?" The primary outcome was the between-group difference in proportions of patients who declined additional analgesia at 60 minutes. Results: 350 patients were enrolled. The proportion who declined additional analgesics was 67.5% in the 2 mg bolus arm and 67.3% in the "1+1" titration arm (difference 0.2%; 95% CI: -9.7%, 10.2%). Although NRS pain scores at 60 minutes had decreased substantially from baseline in all study patients (reduction of 6.1 ± 3.1 NRS units in the 2 mg IV hydromorphone bolus arm and 5.7 ± 3.0 NRS units in the "1+1" hydromorphone titration arm), the between-group difference in improved NRS pain scores from 0 to 60 minutes was only 0.4 NRS units (95% CI: -0.3, 1.1). The incidence of side effects was similar. 42.3% of "1+1" patients achieved satisfactory analgesia at one hour with only 1 mg hydromorphone. Conclusion: An initial 2 mg IV dose of hydromorphone is not superior to the "1+1" titration protocol. The 2 mg hydromorphone bolus protocol and the "1+1" hydromorphone titration protocol had virtually identical efficacy and safety profiles. Compared to the 2 mg bolus protocol, the "1+1" titration protocol had an opioid-sparing effect, as 50% less opioid was needed to achieve satisfactory analgesia for 42.3% of patients allocated to this protocol. This suggests that the "1+1" hydromorphone titration protocol should be considered in preference to a 2 mg bolus of hydromorphone within the first hour for ED patients with acute severe pain.

Background: Low-dose ketamine has analgesic effects. Opioids are used for moderate and severe pain in the ED; however, they can cause serious adverse effects, especially with repeated doses. Ketamine may be an effective alternative for acute pain, and thus far a prospective trial in the ED has not been reported. Objectives: To compare the percentage change in pain score over 120 min between low-dose ketamine (LDK) and morphine (MOR) for acute pain in the ED. Methods: We performed a randomized, prospective, double-blinded trial in the ED from Mar-Nov 2012 at a tertiary, Level I trauma center. A convenience sample, ages 18-59 years, with complaints of moderate or severe abdominal, flank, low-back, or extremity pain were enrolled. The subjects were consented and randomized to IV LDK (0.3 mg/kg) or IV MOR (0.1 mg/kg). Numerical rating scores (NRS), Richmond Agitation Sedation Scores (RASS), and vital signs were recorded at 5, 10, and 20 min, then every 20 min until subjects requested a third dose, were discharged from the ED, underwent procedural sedation, or 120 min had elapsed. Adverse events and repeat doses were assessed. Our primary outcome was the percentage decrease in NRS from baseline at 20 minutes. We chose a sample size of 18 subjects per group based on an 80% power to detect a 2-point change in NRS scores between treatment groups, with estimated group standard deviations of 2 and an alpha of 0.05 (a back-of-envelope check of this calculation follows below).
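A back-of-envelope check of this sample-size statement under the normal approximation, n per group = 2(z_{1-α/2} + z_{power})²σ²/Δ². This is a sketch, not the authors' calculation; an exact t-based formula gives a slightly larger n.

    from statistics import NormalDist

    alpha, power = 0.05, 0.80
    sigma, delta = 2.0, 2.0          # group SD and detectable NRS difference

    z = NormalDist().inv_cdf         # standard normal quantile function
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma ** 2 / delta ** 2
    print(round(n, 1))               # ~15.7, i.e. 16 per group; a t-based
                                     # correction raises this to about 17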
We used a repeated measures linear model with adjustments for treatment group, time, and the group-by-time interaction, with an auto-regressive covariance structure. Results: Thirty-six subjects were enrolled (MOR 16, LDK 20). Demographic variables were similar: mean age (28, 29 years), sex (male 50%, 40%), baseline vitals, and NRS scores (7.4 vs 7.4) (p>0.05). There were no significant differences in adverse events (LDK 55%, MOR 44%, p=0.5), RASS scores at any interval, or repeat dosing (LDK 55%, MOR 25%, p=0.09). There was a greater reduction in NRS in the LDK group at 5 min (67% vs 42%, p<0.001) and 10 min (62% vs 49%, p=0.004). At 20 min, scores were similar (LDK 47%, MOR 48%). From 40-120 min, MOR reduced NRS scores more, reaching a maximum difference at 120 min (LDK 48%, MOR 72%, p=0.002). Conclusion: Ketamine and morphine reduced pain scores equally at 20 minutes. Ketamine reduced pain scores more than morphine at 5 and 10 min, and provided a nearly 50% decrease in NRS pain scores for 2 hours. However, from 40-120 min morphine was more effective.

has been reported to safely and effectively stratify low-risk ED chest pain patients. Furthermore, patients with 'negative' cCTAs have low major adverse cardiac event (MACE) rates (death, MI, revascularization) in the following year. However, all research relating to ED cCTAs has been published out of academic institutions, whose results may not extrapolate to community hospitals. Objectives: Our objective is to describe the prognostic value of a negative 128-slice cCTA for MACE at one month, six months, and one year following initial evaluation at a community hospital. Methods: This was a retrospective chart review of all patients who underwent cCTA as part of a low-risk ED chest pain evaluation between January 1, 2010 and December 31, 2010 at our community hospital. All cCTAs were performed with a 128-slice CT scanner. Based on prior literature, a cCTA was considered 'negative' if there were no lesions demonstrating ≥50% stenosis. According to the Society of Cardiovascular Computed Tomography Guidelines Committee (SCCT), all cCTA results were further divided into no disease, and minimal (<25%), mild (25%-49%), moderate (50%-74%), and severe (75%-99%) stenosis. Using a preformed data extraction sheet, we collected data at the index visit and the 30-day, six-month, and one-year time periods. Our primary outcome measure was the presence of MACE at 30 days, six months, and one year from the index visits for those patients whose cCTAs were negative. Data were analyzed using descriptive statistics, and 95% CIs were obtained using the modified Wald method. Results: A total of 228 patients had cCTAs during the study period. Ten patients were excluded from the final analysis: two with uninterpretable cCTAs, and eight patients with 'positive' cCTAs. The results of the remaining 218 cCTA studies and subsequent MACE rates are described in the table. There was one patient with mild disease on cCTA who had an NSTEMI and revascularization within six months. This patient was evaluated by cardiology at the index visit and recommended to have a stress test, but refused. Our overall MACE rate for 'negative' cCTA patients within one year was 0.5% (95% CI 0.0%-2.8%). Conclusion: In our community hospital, low-risk ED chest pain patients with 'negative' cCTAs have a very low event rate one year after initial presentation.
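A sketch of the modified Wald (Agresti-Coull) interval named in the methods above, applied to the headline result (1 MACE among 218 'negative' cCTAs): add z²/2 events and z² trials before computing a Wald interval. With that adjustment it reproduces the reported 0.5% (95% CI 0.0%-2.8%).

    from math import sqrt
    from statistics import NormalDist

    x, n = 1, 218                            # events, trials
    z = NormalDist().inv_cdf(0.975)          # ~1.96 for a 95% interval

    n_adj = n + z ** 2                       # adjusted number of trials
    p_adj = (x + z ** 2 / 2) / n_adj         # adjusted proportion
    half = z * sqrt(p_adj * (1 - p_adj) / n_adj)

    lower = max(0.0, p_adj - half)           # clip at 0 for rare events
    upper = p_adj + half
    print(f"{x/n:.1%} (95% CI {lower:.1%}-{upper:.1%})")  # 0.5% (95% CI 0.0%-2.8%)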
Objectives: To determine the diagnostic performance of a high-sensitivity troponin I (hs-TnI) assay in discriminating significant coronary artery stenosis risk among candidates for ED CCTA. Methods: Consecutive patients evaluated for ACS at an urban ED on weekdays, from 9 AM until 9 PM, were prospectively enrolled in an observational cohort study. This report focuses on the sub-cohort of patients who presented between January 20, 2012 and July 31, 2012 and received CCTAs as part of their ED evaluation. Patients receiving CCTA were judged by treating clinicians to be low-intermediate risk, had no known history of coronary artery disease, no diagnostic ECG changes, and contemporary troponin I values (Beckman Coulter Access AccuTnI assay) of less than 0.06 ng/ml. hs-TnI was measured using Abbott Laboratories' ARCHITECT high-sensitivity troponin I assay. CCTAs were read by board-certified radiologists and cardiologists, and categorized into: no plaque, plaque with <50% stenosis, 50-69% stenosis, and ≥70% stenosis. Significant stenosis was defined as 50% or greater. Results: A total of 142 subjects (78 females and 64 males) were evaluated. Median age was 53.3 years (IQR: 46.4-59.0). 84 (59.2%) had no plaque, 41 (28.9%) had at least one artery with plaque with less than 50% stenosis, 11 (7.8%) had at least one artery with 50-69% stenosis, and 6 (4.2%) had at least one artery with ≥70% stenosis. The median hs-TnI (pg/ml) was higher in patients with significant stenosis than in those without significant stenosis (6.4 [IQR: 4.7-13.8] vs 3.6 [IQR: 1.9-5.8], p<0.01). hs-TnI can discriminate significant stenosis risk with a c-statistic of 0.78 (95% CI: 0.68-0.88). At a 3.3 pg/ml cut-off, the sensitivity for diagnosing significant stenosis is 100% and the specificity is 44%. Using this cut-off, 55 of the 125 (44%) CCTAs with no significant stenosis could have been avoided. Conclusion: hs-TnI values can identify candidates for ED CCTA who have a low risk for significant coronary artery stenosis.

Background: Coronary computed tomographic angiography (CCTA) is an accurate and non-invasive imaging modality for evaluation of low-to-intermediate risk ED chest pain patients with suspected acute coronary syndrome (ACS). However, its use in the elderly (ages 75 and greater) has been discouraged due to concerns that high calcium scores would severely limit its interpretation. Objectives: We determined the rate of non-diagnostic CCTA studies and the frequency and severity of coronary artery obstruction in elderly ED patients with chest pain. Methods: Study Design: Prospective, observational. Setting: Tertiary care, academic suburban medical center with a regional heart center and an annual ED census of 90,000. Patients: ED patients ages 75 and greater with low-to-intermediate risk chest pain with suspected ACS undergoing CCTA between 8/10 and 8/11. Patients with diagnostic ECGs, elevated cardiac troponin, and a history of CAD, CABG, or PCI were excluded. Interventions: CCTA was performed using a 64- or 320-slice scanner. Outcomes: The quality of the CCTA images was classified as excellent (no artifacts), good (moderate artifact), adequate (moderate artifacts but still interpretable), or non-diagnostic (severe artifacts rendering interpretation impossible). The degree of coronary artery obstruction was classified as normal (no obstruction), non-obstructive (1-49% obstruction), or obstructive (50% or greater obstruction). Data analysis: Descriptive statistics.
Conclusion: CCTA imaging quality is excellent to adequate in most elderly ED patients with chest pain and suspected ACS, and more than half have normal or non-obstructive CAD. The quality of imaging is not associated with coronary calcium scores.

Objectives: We hypothesized that patients with normal or minimal disease on coronary CTA have a less than 1% rate of cardiovascular death or nonfatal myocardial infarction over one year. Methods: We prospectively evaluated 1,207 consecutive patients who received coronary CTA in the ED. Patients with cocaine use, significant co-morbidity reducing life expectancy, and those found to have significant disease (stenosis >50% or ejection fraction <30%) were excluded (n=270). Using a reliable and valid structured data collection instrument with excellent inter-rater reliability, we collected demographic characteristics, medical and cardiac history, laboratory values, and ECG results. Patients were followed by telephone contact and record review for one year. The main outcome was 30-day cardiovascular death or nonfatal myocardial infarction. Data analysis, and the ability to meet our hypothesis, was pre-specified to be based upon the upper limit of the 95% confidence interval being less than 1%. Results: 937 patients presented with potential ACS, received coronary CTA, were found to have <50% maximal stenosis, and were included. Patients had a mean age of 47.3 (8.3) years; 66% were black; 59% were female. 91% had normal/nonspecific ECGs. 91% had TIMI scores <2. Following coronary CTA, 91% of patients were discharged. Over the subsequent year, only 13% were re-hospitalized and 15% received further stress tests or catheterization. There were 3 deaths (0.37%; 95% CI, 0.07-0.93%): one MVC, one amphetamine use, and one pulmonary embolism. There were no AMIs (0%; 95% CI, 0-0.47%) and no revascularization procedures (0%; 95% CI, 0-0.49%). Conclusion: Patients who present to the ED with potential ACS who have negative coronary CTAs have a very low likelihood of cardiovascular events (less than 1%) over the ensuing year.

Methods: This was a retrospective multi-center cohort study of ED visits from all 18 non-military hospitals in San Diego County (pop. 3.2 million) between 2008 and 2010 using data from the California Office of Statewide Health Planning and Development. Patients without identifiers and those less than 18 years of age were excluded. Psychiatric-related visits were defined by the following primary diagnosis ICD-9-CM codes: 290.0-302.9, 306.0-316.9. Frequent ED users were defined as having four or more ED visits over any consecutive 12-month period and were further classified by the number of psychiatric-associated visits: frequent psychiatric users (≥4 visits), occasional psychiatric users (1-3 visits), and non-psychiatric users (0 visits). Charlson Comorbidity Index (CCI) scores were calculated for each patient to assess comorbidity and were compared among frequent user groups; differences in proportions and 95% CIs are presented. Results: There were 788,005 patients with 1,764,599 total ED visits during the study period. 9.1% (n=71,661) of patients were identified as having four or more visits in any consecutive 12-month period, accounting for 36.6% (n=646,544) of all visits. Among these frequent ED users, 80.1% had no psychiatric visits, 16.6% had one to three psychiatric visits, and 3.3% had four or more psychiatric visits.
Compared to non-psychiatric users, a significantly lower percentage of occasional and frequent psychiatric users had CCI scores of 3 or higher (40.1% vs. 26.8%, difference = 13.3%, 95% CI = 12.4%, 14.2%; and 40.1% vs. 24.2%, difference = 15.9%, 95% CI = 14.1%, 17.6%, respectively).

patients with acute asthma have at least one other ED asthma visit in the past year; frequent ED utilization is associated with non-white race, public/lack of insurance, and several markers of chronic asthma severity. Little is known about these issues in other developed countries. Objectives: To identify characteristics of adult asthma patients in Japan with high numbers of ED visits. Methods: We conducted a multicenter chart review study of ED patients, age 18 to 54 years, with acute asthma during 2009 to 2011. Cases were identified using International Classification of Diseases, 10th Revision, codes J45.xx. Participating sites were 23 urban and rural EDs across Japan. Each site investigator reviewed 60 randomly selected charts after training with a 1-hour lecture and practice charts, which were assessed versus a criterion standard. If accuracy was <80% per chart, the individual was retrained. We classified subjects into four ED utilization groups based on the number of ED visits for asthma in the past year: no prior use, 1-2 ED visits, 3-5 ED visits, and 6+ ED visits. We assessed associations between patient factors and the number of ED visits using bivariate and multivariate statistics. Results: Among 1,002 patients with outcome data (73% of total), 784 had no ED visits in the past year (78%), 165 had 1-2 visits (17%), 33 had 3-5 visits (3%), and 20 had 6+ visits (2%). Frequent ED visits were associated with several markers of chronic asthma severity (history of systemic steroid use, hospitalization, and intubation for asthma), current asthma medications (inhaled steroid, methylxanthine), and hospitalization for asthma in the past year (all P<0.001). In a multivariate model, independent predictors of 3+ ED visits were hospitalization for asthma in the past year and current use of inhaled steroids and oral methylxanthines (table). Conclusion: In this multicenter study in Japan, frequent ED visits for acute asthma were much less common than in North America. Frequent utilizers of the ED for acute asthma have several markers of worse chronic asthma severity. ED-initiated interventions for patients with poorly controlled asthma (e.g., inhaled steroids, asthma education) merit further investigation in Japan.

Background: As fiscal resources continue to shrink and ED overcrowding rises, communities need to pool resources to improve care of a region's patients. One approach is to identify and address the needs of frequent users of the ED, who disproportionately utilize medical resources. Objectives: To evaluate patient characteristics and patterns of use of frequent users of ED resources. Methods: A retrospective multi-center cohort study of hospital ED visits from 324 non-military, acute care hospitals in California in 2010 using data submitted to the California Office of Statewide Health Planning and Development. Patients were classified into three ED utilization groups based on the number of ED visits in 2010: occasional users (1 to 5 visits), frequent users (6 to 20 visits), and super users (>20 visits). Demographics and patterns of use were described for the three patient populations, and differences in proportions and 95% CIs are reported (a sketch of this computation follows below).
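The "difference in proportions with 95% CI" reported throughout these frequent-user analyses is a Wald interval. A sketch applied to the CCI comparison above, with group sizes approximated from the reported percentages of the 71,661 frequent users (an assumption, since exact denominators are not given); with those approximations it reproduces the reported 13.3% (95% CI 12.4%, 14.2%).

    from math import sqrt
    from statistics import NormalDist

    def diff_proportions_ci(p1, n1, p2, n2, conf=0.95):
        """Wald confidence interval for the difference p1 - p2."""
        z = NormalDist().inv_cdf(0.5 + conf / 2)
        diff = p1 - p2
        se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return diff, diff - z * se, diff + z * se

    n_nonpsych = round(0.801 * 71661)   # ~57,400 non-psychiatric frequent users
    n_occas = round(0.166 * 71661)      # ~11,896 occasional psychiatric users

    diff, lo, hi = diff_proportions_ci(0.401, n_nonpsych, 0.268, n_occas)
    print(f"{diff:.1%} (95% CI {lo:.1%}, {hi:.1%})")  # 13.3% (95% CI 12.4%, 14.2%)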
Conclusion: In this large statewide study, frequent users of acute emergency resources were responsible for a large and disproportionate share of ED visits. Super users were younger, more often had a primary diagnosis of pain, and visited multiple EDs compared with occasional and frequent users.

Methods: This retrospective study explored the characteristics of frequent attenders, defined as patients who had four or more visits at Singapore General Hospital between Jan 1, 2010 and Dec 31, 2010. Information collected included demographic characteristics, socioeconomic profile, and clinical information for each attendance. Results: A total of 105,616 patients attended the emergency department in 2010. Of the 875 patients with chronic kidney disease, 278 (31.7%) were frequent attenders and were responsible for 1,881 visits. 52% of these patients were female, with a mean age of 65 years (SE 0.81), and 64.4% were Chinese. 88.3% of the patients had at least four underlying co-morbidities and 87% of the patients had end-stage renal failure. Patients presented with complaints such as blocked catheters, hypotension, and anemia. 57.9% of the visits were self-referrals and 36% were referrals from dialysis centers. While 58% of the patients were triaged as priority 2, only 11% required surgical interventions. Conclusion: Frequent attenders contribute to overcrowding and increased workload at the emergency department. The most common complaints in patients with chronic kidney disease were issues with arterio-venous fistulas and prosthetic devices. A large number of visits were self-referrals, indicating the need for more patient education with regard to available primary care institutions that would be more appropriate to manage their needs.

Background: There is growing attention focused on so-called frequent users of emergency departments (EDs). Individuals who are homeless are frequently seen in the ED, particularly in urban settings, and often need both their medical and social needs addressed. Objectives: We sought to determine the effect of social services case management targeting homeless patients who visit local EDs most frequently in our community. Methods: We conducted a multi-center, pre/post study in two EDs (located in the urban core of a city of 3.1 million persons) with a combined census of 61,000, examining ED visits for homeless patients enrolled in a unique community program providing social services case management, including housing resources (United Way Project25). Homeless individuals with high rates of visits to area EDs were approached by project staff to participate in the program. Data included ED and inpatient visits for 6 months before (pre) and after (post) enrollment. We compared health care utilization during both periods to determine the effect of the program on ED visits and inpatient admissions. The utilization difference between the pre- and post-intervention time periods was compared using a chi-square test. Differences in proportions and 95% confidence intervals are reported. Results: The social services case management program identified and enrolled 36 individuals who were frequent users of local EDs and were homeless. The number of participants who used ED services decreased from 28 (77.8%) to 23 (63.9%) from the pre- to post-intervention periods (difference = 13.9%; 95% CI = -7.0% to 33.3%; p=0.195). Overall ED visits decreased 53%, from 115 to 54 visits pre- to post-intervention, or from 19.2 to 9 ED visits per enrollee over each six-month period.
There was a significant decrease in the proportion of participants with inpatient admissions from the pre- to post-intervention periods, 16 (44.4%) to 7 (19.4%) (difference = 25.0%; 95% CI = 3.4% to 43.7%; p=0.023). Similar to overall ED visits, overall inpatient admissions decreased 68%, from 25 to 8 admissions from the pre- to post-intervention periods, or 0.64 to 0.22 inpatient visits per enrollee over each six-month period. Conclusion: In this limited study, a social services case management program focused on homeless frequent users of the ED significantly reduced ED visits and inpatient admissions for our target population.

Background: Non-traumatic subarachnoid hemorrhage is often a life-threatening neurosurgical emergency. When a patient presents to the emergency department with sudden severe headache, traditional teaching is to perform a computed tomography scan of the brain and, if negative, a lumbar puncture to analyze the cerebrospinal fluid in order to exclude subarachnoid hemorrhage. Objectives: To assess the cerebrospinal fluid of patients with non-traumatic headache to determine how to distinguish between traumatic tap and subarachnoid hemorrhage. Methods: A sub-study of a prospective multicenter cohort study. The study was conducted in 12 Canadian academic emergency departments from November 2000 to December 2009. Alert patients over 15 years of age with acute non-traumatic headaches who underwent lumbar punctures to rule out subarachnoid hemorrhage were included. Results: During our study there were 4,131 patients enrolled in the Ottawa SAH Rule Study. 1,739 patients underwent lumbar puncture and 678 (38.9%) had >1×10⁶/L red blood cells in one of the tubes. There were 262 cases of subarachnoid hemorrhage in the entire cohort, 40 of which were diagnosed based on abnormal lumbar puncture results. The presence of fewer than 9,000×10⁶/L red blood cells in addition to negative xanthochromia excluded the diagnosis of subarachnoid hemorrhage with a sensitivity of 92.5% (95% CI: 80.0-97.0) and a specificity of 97% (95% CI: 95.0-98.0). The sensitivity of visual xanthochromia for subarachnoid hemorrhage was 62.2% (95% CI: 46.1-75.9) and the specificity was 100% (95% CI: 99.4-100). No cases of subarachnoid hemorrhage missed by these criteria were aneurysmal. Two were managed conservatively and one was secondary to vertebral artery dissection. Conclusion: A combination of negative xanthochromia and a red blood cell count <9,000×10⁶/L excludes the diagnosis of significant subarachnoid hemorrhage.

Objectives: We studied the correlation of abnormal QEEG in acute stroke patients with initially normal head CTs and positive MRIs. Methods: A convenience sample of stroke patients presenting to an urban academic ED was recruited. Stroke was determined by history and physical exam. Brain QEEG electrical activity was collected using an investigational device (BrainScope™). EEG data were uploaded for later analysis. The probability of brain dysfunction was assessed using an independently developed quadratic discriminant function defining an index of brain electrical activity (IBEA). All patients underwent head CT scan. Some patients with negative CTs had immediate MRI evaluation based on clinical needs. Correlation between CT, MRI, and IBEA was performed. Results: Seventy-eight patients were consented. Sixteen patients presenting with stroke symptoms had initial negative CTs, positive MRIs, and useable QEEG data.
In 81% of these CT-negative, MRIpositive cases, the IBEA was positive. Discussion: QEEG is a rapid, noninvasive bedside test performed with a portable device. This technology might add additional diagnostic information for identification and earlier treatment of acute stroke. Currently there is no QEEG algorithm for identification of changes in cerebral electrical activity associated with stroke. The algorithm we employed to calculate the IBEA was originally derived in a separate trial to maximally identify patients who had suffered a traumatic brain injury. In that study, the algorithm correlated with a positive head CT scan with a sensitivity of 95% and NPV of 98%. Conclusion: In a preliminary trial, we have shown that a QEEG algorithm developed to calculate the IBEA for traumatic brain injury shows potential to also identify patients with CT-negative acute stroke. Further work is needed to refine this algorithm and apply it prospectively to series of stroke patients as confirmed by positive CT or MRI as well as age-matched controls. QEEG may have a potential role as an adjunctive diagnostic tool for evaluation of acute stroke. Thymosin b4 in the Treatment of Acute Stroke: A Dose Response Study Daniel C. Morris, Michael Chopp, Cui Yisheng, James Yang, Li Zhang, and Zheng G Zhang Henry Ford Health System, Detroit, MI Background: Thymosin b4 (Tb4) is a 5K peptide which influences cellular migration by inhibiting organization of the actin-cytoskeleton. Tb4 has neurorestorative properties and is a potential candidate for the treatment of acute stroke. Tb4 improves neurological outcome in a rat model of embolic stroke and research is now focused on optimizing its dose for clinical trials. Objectives: We hypothesized that Tb4 would dose-dependently improve neurological outcome in a rat model of embolic stroke. Methods: Male Wistar rats (n=40) were subjected to embolic middle cerebral artery occlusion (MCAo). Rats were divided into four groups of 10/group: control, 2, 12, and 18 mg/kg. Tb4 (Regenerx, Inc.) was administered intraperitoneally 24 hrs after MCAo and then every 3 days for four additional doses in a randomized controlled fashion. Personnel performing surgeries and neurological functional tests were blinded to the dose of Tb4. The adhesive-removal test (ART), foot fault test (FFT), and the modified Neurological Severity Score (mNSS) were performed prior to MCAo and at various times for 8 weeks. The rats were sacrificed 56 days after MCAo and lesion volumes measured. The global test using the Generalized Estimating Equation was used to compare the treatment effect on functional recovery. The quadratic response surface model was used to determine the optimal dose of Tb4 and t-test for lesion volumes. Results: Tb4 significantly improved neurological outcome at dosages of 2 and 12 mg/kg beginning at day 14 and until rats were sacrificed, p<0.05. The 2 mg/kg dose demonstrated a 24.2, 34.0, and 32.1% improvement in the ART, FFT, and mNSS, respectively, while the 12 mg/kg showed a similar improvement of 33, 31, and 26%. The higher dose of 18 mg/kg did not show significant improvement. Lesion volumes of the control, 2, 12, and 18 mg/kg groups were 33AE8.7%, 26AE10.1%, 24AE9.9%, and 33%AE12.6%, respectively, p>0.05. The figure demonstrates results of the mNSS. Statistical curve fitting determined that a dose of 3.75 mg/kg showed optimal neurological improvement. 
Conclusion: This dose-response study showed that neurological improvement was observed at 2 and 12 mg/kg doses with an optimal dose of 3.75 mg/kg. No improvement was observed at the 18 mg/kg level indicating a ceiling effect. Lesion volumes showed no differences, suggesting a neurorestorative effect. These results provide preclinical data for a phase I clinical trial. Objectives: To determine whether IV metoclopramide (MCP) was more effective than IV ketorolac (KTC) for 1) an acute primary headache that does not meet criteria for migraine or cluster headache, and 2) TTH. Methods: Adult ED patients who did not meet criteria for migraine or cluster were eligible for inclusion. Patients who met criteria for TTH were considered a sub-group of interest. MCP was combined with diphenhydramine (DPH) 25mg IV to prevent akathisia. Assignment was determined using a random number generator and known only to the pharmacist. All medications were clear solutions and appeared indistinguishable to the naked eye. The investigation medication was administered as a 15-minute IV infusion. The primary outcome was improvement in pain score, which was assessed on a verbal 0 to 10 scale at baseline and 60 minutes. Secondary outcomes included need for rescue medication, achieving headache freedom in the ED and sustaining it for 24 hours, and patient's desire to receive the same medication during the next ED visit. These are reported as number needed to treat (NNT) with 95%CI for MCP+DPH vs KTC. To account for two analyses of overlapping populations (all bland headaches and TTH alone), the alpha was set to 0.025. Results: 120 patients completed the protocol. Overall, the MCP group improved by 5.1 (SD 2.8) while the KTC group improved by 3.8 (SD 2.6), for a difference of 1.3 (97.5%CI 0.2, 2.5). Similarly, among the subgroup with TTH, those who received MCP improved 1.3 points more than the KTC group (97.5%CI: 0, 2.6). MCP outperformed KTC on all three secondary outcome measures: the NNT for rescue medication in the ED was 3 (95%CI: 2, 6), for achieving and maintaining a pain-free state was 6 (95%CI: 3, 20) , and for patient desire to receive the same medication again was 7 (95%CI: 4, 65), There were no serious or unexpected adverse events. Drowsiness at one hour was reported by 65% of the metoclopramide group and 36% of the ketorolac group (difference 29%, 95%CI 12, 47%). Objectives: To determine the rate of seizure cessation among the pediatric age group treated with IM midazolam versus intravenous (IV) lorazepam in the prehospital care setting. Methods: This was an exploratory analysis of the RAMPART multi-center trial that randomized patients who were diagnosed with SE to IM midazolam and IV placebo (IM group) or IM placebo and IV lorazepam (IV group) administered by paramedics. Included here were patient < 18 years. Evaluated were patient characteristics and time to important events among the two treatment groups (t-test, chisquare, Fisher's exact). The primary outcome was seizure cessation by ED arrival among subjects < 18 (and the subgroup < 11, n=96) years of age. Results: Of 893 RAMPART study subjects, 120 met criteria for this study (60 in each treatment group). There were no differences in important baseline characteristics or seizure etiologies between groups (table). The primary outcome was met in 41 (68.3%) and 43 (71.7%) of subjects in the IM and IV groups respectively (risk difference [RD]: -4%, 95% CI: -23.5, 15.5; p=0.69). 
Similar results were noted for those < 11 years (RD: -1.5%, 95% CI: -23.1%, 20.1%). The figure demonstrates that time from active drug administration to seizure cessation was shorter for those receiving IV lorazepam but the time from initiating the treatment protocol (opening the box) was shorter for those receiving IM midazolam. This was mainly due to the shorter time to actually administer the active IM treatment. Safety profiles were similar. This study suggests that IM midazolam can be rapidly administered and is safe and effective for the management of SE in the prehospital setting. The results are limited by sample size (noninferiority cannot be demonstrated) and the secondary analysis nature of the study. Objectives: We assessed the hypothesis that EMS use in tPAtreated stroke patients would be lower in rural areas compared to urban areas. We also examined important time intervals between groups. Methods: Prospective, observational study using previously collected data from 24 randomly selected Michigan community hospitals in the INSTINCT stroke trial. Hospitals were identified a priori as urban or rural using two models to account for varying rural definitions. Model 1 defined rural hospitals as those outside a Metropolitan Statistical Area (MSA). Model 2 used hospitals outside a major Urban Area (UA > 150 square miles). Descriptive statistics are presented; Student's t and chisquare tests were used in the comparisons. Results: All 557 patients treated with tPA for acute ischemic stroke from 2007 -2010 were included in the analysis. 82% (95% CI: 79%-85%) used EMS to access stroke care. Patients in both urban and rural groups had similar demographics. EMS transport times were significantly longer for rural patients in both models (Model 1 EMS dispatch to hospital time 41 minutes rural vs. 32 minutes urban, p value <0.001; Model 2, 36 minutes rural vs. 32 minutes urban, p value 0.006). Model 2, with a more restrictive geographic definition of a rural hospital, identified a significant reduction in EMS use in rural patients compared to the urban group (76% rural vs. 86% urban, p value 0.003). Conclusion: Overall EMS use among stroke patients receiving tPA was substantially higher than previously reported in the general stroke population. Lower EMS use in rural settings, however, was confirmed in the restrictive model. EMS transport times were longer in the rural setting, likely reflecting greater travel distances. EMS-level interventions to improve tPA delivery would reach a large majority of treated patients in both urban and rural settings. Objectives: To measure the effect of an evidence-based guideline (EBG) on the rates of diagnostic testing for children presenting to a pediatric emergency department (ED) with syncope by assessing diagnostic testing before and after implementation. Children's ED with syncope or pre-syncope were analyzed. Those children who were ill-appearing, had known cardiac or neurologic disease, significant comorbidities, known ingestion, or major trauma preceding syncope were excluded. Rates of diagnostic testing performed on children before (Jul 2010-Jun 2011) and after (Nov 2011-Oct 2012) EBG implementation were compared. The EBG advocated for obtaining an EKG on all patients and a urine pregnancy test on postpubertal females. The EBG recommended other evaluation (labwork, orthostatic vital signs, imaging, and specialty consultation) not be performed routinely unless clinically indicated. 
After the ED visit, electronic medical records were reviewed and phone follow-up was performed 2 months after the visit to measure further utilization and outcomes beyond the ED visit. Results: 168 patients were enrolled before and 176 patients after implementation of the EBG. Diagnostic testing rates increased for the recommended tests, and they decreased for all tests not recommended. CBC testing decreased from 37% (95% CI 30-44%) to 16% (12-23%) and electrolyte testing decreased from 30% (24-38) to 12% (8-18%). There were no missed cardiac events or deaths identified and the decrease in diagnostic testing did not lead families/patients to seek further care. Testing identified two patients who did not know they were pregnant. The implementation of an EBG for the ED evaluation of pediatric patients presenting with syncope was associated with decreased rates of unnecessary diagnostic testing. It was not associated with families seeking further testing or adverse events. Objectives: To describe variation in pediatric head CT rates among United States EDs and to determine hospital factors associated with lower scanning rates. Methods: This was a retrospective cohort study of United States ED visits using the Nationwide Emergency Department Sample database from 2006-2008. We evaluated patients less than 19 years old with ICD9-CM diagnosis codes of head trauma and used CPT codes to identify head CT scans. We calculated ED-specific head CT rates adjusting for patient-level confounders (age, sex, insurance status, and severity of injury as defined by ICD9-CM diagnosis codes for intracranial injury) and clustering of patients within EDs. We used Bayesian shrinkage estimation to address CT rate instability at lowervolume hospitals. To evaluate the effect of hospital-level factors on CT rates, we used negative binomial regression. Objectives: The objective of this study was to assess the test accuracy of standard AP pelvic radiography for identifying children with pelvic fractures after blunt torso trauma. We hypothesized that AP pelvic radiographs fail to identify all children with pelvic fractures, including an appreciable number of those with hypotension or who undergo operative intervention. Methods: We conducted a prospective multicenter observational study of children (<18 years) with blunt torso trauma in the Pediatric Emergency Care Applied Research Network (PECARN). We considered pelvic fractures present if a fracture to the pelvic bones (pubis, ilium, ischium, or sacrum) or pelvic joint dislocation was documented by the orthopedic faculty physician prior to ED/hospital discharge. We compared AP pelvic radiography to the final clinical diagnosis and described the data with descriptive statistics. Results: Of the 12,044 patients enrolled, 451 (3.7%, 95% CI 3.4, 4.1%) had pelvic fractures, 65 (14%, 95% CI 11%, 18%) of which underwent operative therapy. Among the 382 patients with pelvic fractures who underwent AP pelvic radiography in the ED, the radiographs had a sensitivity of 297/382 (78%, 95% CI 73, 82%) for patients with pelvic fractures/dislocations and 55/60 (92%, 95% CI 82, 97%) for patients undergoing operative therapy. Eighty-one of the 85 pelvic fractures not identified via AP radiography were identified by abdominal/pelvic CT scans (95%, 95% CI 88, 99%). Twenty-one (4.7%, 95% CI 2.9, 7.0%) of the 451 patients with pelvic fractures had age-adjusted hypotension on initial presentation and 17 had AP pelvic radiography performed. 
AP pelvic radiography identified fractures in 14 (82%, 95% CI 57, 96%) of these patients. Conclusion: AP pelvic radiography has a poor sensitivity for identifying children with pelvic fractures after blunt trauma, including a significant portion of patients with hypotension or who undergo operative therapy. AP pelvic radiography should not be relied upon as the sole diagnostic test in patients considered at high risk of pelvic fracture. Furthermore, AP pelvic radiography is unnecessary in patients when abdominal/pelvic CT scanning is otherwise planned. Network (PECARN) head injury criteria provide an algorithm to identify patients at very low risk for clinically important traumatic brain injury (ciTBI). We have previously described that application of the PECARN criteria would have decreased head CT utilization in a pre-verbal (age < 2) cohort by as much as 64.3% at our community hospital while maintaining 100% sensitivity for identification of ciTBI. The PECARN investigators noted that by implementing their algorithm, head CTs could potentially be avoided for 20% of verbal (age 2-18) pediatric patients. Objectives: Our objective was to quantify the number of pediatric heads CTs (age 2-18) performed at our community hospital that could have been avoided by utilizing the PECARN criteria. We conducted a retrospective chart review of all children between the age of 2 and 18 who presented to our community hospital and received head CT scans between Jan 1 st , 2010 and Dec 31 st , 2010. Utilizing a previously validated data extraction sheet, all pediatric patients (age 2-18) who had head CTs for trauma during the study period were included in the evaluation. Our primary outcome measure was the number of patients who were PECARN negative and received head CTs at our institution. Our secondary outcome was to reevaluate the sensitivity and specificity of the PECARN criteria to detect ciTBI in our cohort. Data were analyzed using descriptive statistics; 95% confidence intervals were calculated around proportions using the modified Wald method. Results: A total of 804 patients between the ages of 2-18 received head CTs at our institution during the study period. 238 patients were excluded as their head CTs were not ordered due to trauma. The prevalence of a ciTBI in our cohort was 4.1% (95% CI 2.7% -6.1%) ( Background: Increasing numbers of geriatric patients are presenting to emergency departments (ED), many of whom have low functional and cognitive status at discharge and are frequent ED visitors. Return ED visits may be a result of inadequate coordination of care or poor discharge planning. Brief ED assessments are needed to identify those patients more likely to return so that patient needs are met, preventing unnecessary ED visits and hospitalization. Objectives: The goal of this study was to examine whether a rapid screening assessment incorporating a newly developed global disability scale, age, polypharmacy and prior use of the ED in the previous 12 months accurately identifies older ED patients who will have 30-day ED revisits, hospital admissions, or death following an index ED visit. Methods: An expert panel identified four commonly available and consistently associated predictors of return visits (prior ED visits in the past 12 months, disability, polypharmacy, and age). Community-dwelling Medicare patients (65+y) presenting to the adult ED at the Yale-New Haven Hospital (n=250) were interviewed during their visits to assess these factors. 
Disability was assessed by a 12-item questionnaire, developed by our team from a list of over 400 disability items using Rasch analysis, measuring physical and cognitive disability, depression, stress, and isolation. Subsequent 30-day ED visits, hospital admissions, or death were assessed by medical record review. Multivariable logistic regression and receiver operating characteristic (ROC) curves were used to evaluate the ability to accurately predict the likelihood of a 30-day event. Results: 42 (17%) participants experienced at least one 30-day return visit or death. In the multivariable model, prior ED visits (OR=2.6, 95% CI=1.2,5.5), greater global disability (OR=1.56 , 95%CI=0.99,2.5), age (OR=1.04 , 95%CI=1.0,1.08), and polypharmacy greater than 10 medications (OR=1.8, 95%CI=0.9,3.9) were associated with a greater likelihood of a 30day event. The fit of the multivariable model was good (Hosmer-Lemeshow Goodness of Fit test, p=0.85) and it provided good discrimination between those having and not having 30-day events (AUCROC = 0.73). The predicted probabilities of a return visit ranged from 3% to 56%. Our assessment provides a rapid and accurate method for identifying older patients in the ED who are likely to recidivate. Americans will be older than 65. Geriatric patients are more likely than younger patients to be admitted to the hospital when they present to the ED and have higher readmission rates following hospitalization. Geriatric EDs have been proposed as a means to improve acute care for the elderly and reduce avoidable hospital admissions. Identifying trends in short-stay admissions among geriatric patients may inform the development and focus of geriatric EDs. Objectives: To evaluate trends in geriatric short-stay hospitalizations from 1990-2010 and identify differences between older and younger adults. Survey, a representative sample of US hospitalizations conducted by the National Center for Health Statistics. Trends were analyzed from 1990 to 2010. Short-stay was defined as less than 1, 2, or 3 days of hospitalization. Age ranges were 22-65, 65-74, 75-84, and ! 85 years. Changes in number of short stay admissions and proportion of total admissions that were short-stay were evaluated. Elective admissions and those resulting in death were excluded. Trends were evaluated using linear regression. Results: 3.9 million observations represented 512 million hospitalizations; 221 million (43%) were geriatric. Of these, 11%, 24%, and 39% were short-stay admissions when short-stay was defined as 1, 2, or 3 days. Between 1990 and 2010, short-stay admissions increased for each geriatric age group, both in number and as a proportion of total hospitalizations. These findings were most pronounced for patients 75-84 years old and 85 and older and were essentially absent for Conclusion: Short-stay admissions for all elderly increased between 1990 and 2010 both in number and as a percentage of total admissions. This remained true regardless of definition of short-stay. These significant trends were most pronounced in the extreme elderly, while little or no change was seen for adults <65. Future research is needed to better understand and optimize treatment for geriatric patients requiring acute care but only brief admission. Improved knowledge of the characteristics of this group and a more nuanced understanding of the unique needs of each age subgroup among geriatric patients is even more essential given the growing geriatric demographic. Is it the Volume or the Hospital? 
a National Look at ED Admission Rates for Geriatric Patients Scott M. Dresden, Emilie S. Powell, Rahul K. Khare, Amer Aldeen, and James G. Adams Northwestern University Feinberg School of Medicine, Chicago, IL Background: Geriatric EDs use a patient-centered approach to improve outcomes and decrease costs for elderly ED patients. One measure of decreased costs and improved outcomes is decreased admission rate. Objectives: Describe the current admission rate for elderly ED patients, the variability in geriatric admission rate by ED, and hospital factors correlated with an increased admission rate for elderly ED patients. We hypothesize that EDs with high geriatric patient volumes will have decreased admission rates. Emergency Department Sample, including 28.4 million ED visits at 980 EDs. Patients who died in the ED or were transferred to another hospital, and those treated in EDs with fewer than 100 geriatric visits, were excluded. Patient-level admission rates were calculated by age. Mean hospital-level admission rates for adult patients and geriatric patients (age>65) were calculated for each ED. Linear regression of hospital-level ED admission rate adjusted for hospital factors: ED adult volume, ED geriatric volume, location, region, teaching status, trauma designation, and safety-net status. Results: Patient-level admission rate increased with age: 5.4% at age 18 (95% CI 5.3%-5.5%), 32.9% at age 65 (95% CI 32.7%-33.1%), 50.5% at age 87 (95% CI: 50.2%-50.7%), 54.6% at age 96 (95% CI 53.8%-55.3%). Mean hospital-level admission rates were higher with wider variation for geriatric patients than non- Objectives: We hypothesized that there has been a greater increase in ED geriatric visits than in total ED visits since 2000. Methods: Retrospective cohort study. Setting: Ten New Jersey EDs located in suburban and urban areas. These include teaching and nonteaching hospitals with annual ED volumes from 30,000 to 85,000. Participants: Consecutive patients seen by ED physicians from 2000 through 2011. Protocol: We computed the number of visits for each calendar year all patients ! 65 and by sex in the following three age groups (years): 65-74, 75-84, and > 85. We then calculated the percent increase in visits for all patients ! 65, in each age group and for all visits by sex from 2000 to 2011. We tested for statistically differences using chi-square with alpha set at 0.05. 17 million visits to US EDs. Although EDs serve an essential role in the evaluation and treatment of acute injury and illness, ED care for older adults with exacerbations of chronic medical problems may be less efficient and more expensive than care by primary providers. North Carolina (NC) is a large state with an uneven distribution of health care resources. Objectives: We sought to describe the geographic variation in ED visits by older adults across NC. We hypothesized that variations in per capita ED visits by older adults would be explained by geographic, health, and sociodemographic characteristics. Census data were used to determine per capita ED visits by older adults for each zip code tabulation area (ZCTA). ZCTAs with fewer than 30 older adult residents were excluded. A multivariable regression model was developed using predictor variables thought to influence ED visits by older adults. Geographic-weighted regression was then used to analyze spatial variations in the relationship between predictor variables and per capita ED visits. Results: Analysis was performed on 770 of the 808 NC ZCTAs. 
The lowest quintile of ZCTAs had a median of 0.36 ED visits by older adults per older adult resident, while the highest quintile had a median of 0.79 visits (figure). After adjusting for population density, proportion of older adults reporting self-care disability, and proportion of nursing home beds, ZCTAs with a lower proportion of older adults who graduated from high school had higher per capita ED visits (p < 0.05). Median household income and percentage of non-white older adults were not independent predictors of per capita ED visit rates by older adults. Results of geographic weighted regression showed large residuals in the northeastern and western parts of the state, suggesting that variables not included in the model account for variance in ED visits in these locations. We observed large variations in per capita ED visits by older adults across NC in 2010 that are only partly explained by geographic, health, and sociodemographic characteristics. Further work is needed to directly assess the contribution of access to primary care to regional variations in per capita ED visits by older adults. Little is known about the quality of care transitions for older adults seen in the ED and discharged home. Objectives: To characterize the quality of care transitions for patients ! 65 years old seen in the ED and discharged home prior to the institution of a care transitions intervention. Methods: A convenience sample of older adults ! 65 years old who presented with an ESI > 1, knew their name, and were ambulatory with or without assistance prior to the presenting illness or injury were recruited from a large, urban ED beginning in January 2012. We administered a baseline interview and survey; for older adults discharged from the ED, we performed two-week follow-up telephone calls to administer the Care Transitions Measure-3 (CTM-3), a validated instrument designed to assess the quality of care transitions and predict ED revisits and readmissions. Results: Forty-seven patients have been enrolled who were discharged home, and 44 of the 47 (94%) were reached for follow-up and administered the CTM-3. The following number of patients agreed or strongly agreed with the following regarding their discharge plan from the ED: 1) My preferences were taken into account (N=35, 80%); 2) I understood my own responsibilities for managing my health (N=41; 93%); and 3) I understood the purpose of all medications (N=38; 86%). Conclusion: Preliminary data suggest that the majority of older adults discharged from the ED to home felt they had quality transitions in care. To further improve quality, care transitions for this group could focus on taking patient preferences into account and performing medication reconciliation. Methods: This is a retrospective observational cohort study of lowacuity ED HF patients managed in a high-volume urban teaching hospital over 12 consecutive months in 2011. The exposure of interest is the presence of SCPC HF exclusion criteria among patients managed in either the EDOU or as short-stay inpatients (SSIP), meaning admission for < 24 hours. Admission to either setting is at the discretion of the ED physician. EDOU patients were further subdivided into those discharged or admitted from the EDOU. All 30-day re-admissions to the index hospital were identified. Descriptive statistics and chi-square analyses were performed using Microsoft Excelâ. Results: Over the study period there were 119 SSIPS, with 16 excluded due to misclassification, and 56 EDOU patients. 
Although fewer SSIPs vs EDOU patients had 0 exclusion criteria (10% vs 30%; p value 0.001), the number of exclusions present per patient was otherwise statistically similar between groups: 1 (27% vs 27%), 2 (35% vs 25%), 3 (12% vs 11%), 4 (11% vs 5%), 5 (4% vs 2%), 6 (2% vs 0%). The only SCPC exclusion criterion that was statistically different between SSIPs and EDOU was percentage of patients with troponin > 0.07 (34% vs 18 %; p value 0.01). BNP > 1000 was the most common single exclusion criteria for both groups (51% and 48%, respectively). More admitted EDOU vs discharged EDOU patients had BNP > 1000 (70% vs 42%; p value 0.04). BNP levels tended to be higher for SSIPs, admitted EDOU patients, and patients who returned in 30 days versus than those discharged from the EDOU. Conclusion: In terms of SCPC heart failure eligibility, SSIPs were similar to EDOU patients and may represent an opportunity to manage additional HF patients as observation patients. Background: Informing ED patients of their risk estimate for adverse outcomes has been shown to safely reduce testing. The Canadian CT Head Rule (CCHR) reliably predicts clinically important brain injury and the need for neurological intervention, and has been validated to be 100% sensitive and more specific than other rules. The CCHR categorizes patients as medium-risk, high-risk, or "negative" (i.e., do not meet any criteria). Specific risk estimates for these groups are not known. Despite implementation of the CCHR, overuse of CT in minor head injury remains as high as 35%, thus increasing costs and unnecessary exposure to ionizing radiation. Objectives: To quantify risk estimates for clinically important brain injury, need for neurological intervention, or any brain injury in ED patients with minor head injury. Methods: A secondary analysis of subjects pooled from the prospective observational CCHR derivation (3,123 subjects) and validation (2,708 subjects) cohorts was performed on minor head injury patients from ten Canadian community and teaching hospital EDs. Proportion of cases meeting different combinations of the CCHR criteria and eventually found to have clinically important brain injury, need for neurological intervention, or any brain injury were to be reported using descriptive statistics. Results: The number of subjects meeting each combination of criteria and their corresponding risk estimates are provided in the table. Conclusion: Patients who are CCHR negative have very low risk estimates for clinically important brain injury and need for neurological intervention. This finding further corroborates that compliance with the CCHR should be a part of normal practice. Provision of a quantitative risk estimate to ED patients with minor head injury could be incorporated into formal patient decision aids to promote informed choices. Augmenting the CCHR with such a decision aid could help to safely reduce CT overuse. Examining Clinical Decision Support Integrity: Is Clinician Self-Reported Data Entry Accurate? Anurag Gupta, Ali Raja, and Ramin Khorasani Brigham and Women's Hospital, Boston, MA Background: While federal legislation has mandated the use of clinical decision support (CDS) systems, the accuracy of computerized provider order entry (CPOE) data in the ED is unknown, and inaccuracies may lead to erroneous CDS recommendations. Objectives: To determine the accuracy and downstream effects of clinician data entry dependent CDS designed to guide the appropriate use of CT for ED patients with suspected pulmonary embolus (PE). 
We use this case example because clinician-entered D-dimer results can be unambiguously compared to laboratory D-dimer results. We hypothesized that clinician-reported data entry was more than 90% accurate, and that inaccuracies leading to inappropriate recommendations were uncommon. Methods: This retrospective chart review study included all patients for whom clinicians ordered CTs for suspected PE between January 1, 2011, and December 31, 2011 in the ED of our 793-bed Level I trauma center. Emergency clinicians used the CPOE system to place orders for CTs for suspected PE and were required to input D-dimer value as well as individual Wells' criteria. We assessed the concordance between clinician data entry regarding, and actual laboratory assay results for, D-dimers. Chart reviews were performed by two ED attending physician abstractors, who discussed any cases for which their results differed. Results: Overall, 59,519 patients presented to the ED during the study period, 1,296 (2.2%) of whom had CTs ordered for suspected PE. Clinicians accurately entered D-dimer data for 1,175 (90.7%) of these patients. Forty (33.1%) of the 121 errors resulted in inappropriate CTs being performed, all of which were negative for PE. Twelve (9.9%) cases resulted in appropriate (elevated D-dimer) CT orders being canceled, of which 11 were later ruled out for PE and 1 was found to have PE by ventilation-perfusion scan. Fifty-five cases (45.5%) might be classified as "gaming" where clinicians indicated that the D-dimer was elevated or not ordered despite a laboratory-returned D-dimer that was normal, all of which were negative for PE. Conclusion: Consistent with our hypothesis, more than 90% of clinician data entry was concordant with laboratory D-dimer values. However, the 9.3% error rate may have been prevented with better integration between the CDS and EMR systems to minimize data entry. Methods: Retrospective derivation and prospective validation at two ED sites (suburban ED, tertiary university hospital ED). Records from a random selection of all "flank pain protocol" CTs in both EDs from 4/ 05-11/10 were reviewed. Exclusions were absence of flank or back pain, history of renal disease or urologic intervention, age <18y, or evidence of infection. Factors (n=113) from history, physical exam, and point-ofcare testing were abstracted blinded to CT diagnosis. CT reports were independently and blindly reviewed for symptomatic ureteral stone or acutely important alternate findings. In the derivation set, significant variables associated with ureteral stone in a multivariate logistic regression model were assigned points as in the Framingham study. An independent validation cohort was prospectively and consecutively enrolled from 5/11-11/12, blinded to the point system. Standard methods to assess calibration and discrimination of the rule were used. Results: Of 5,383 CTs, 1853 were selected for review and 1040 CT encounters had no exclusion criteria and were used to derive the rule; the validation cohort included 342 prospectively enrolled subjects. The derivation and validation sets had ureteral stones in 55% and 56% and acutely important alternate findings in 2.9% and 3.8%, respectively, with otherwise similar demographics. Predictors of ureteral stone are shown with associated points in the table. Accuracy of categorization divided into low, moderate, and high risk groups for both sets is shown in the figure. 
For the validation cohort, the AUC was 0.80 (95% CI 0.76 to 0.84), Hosmer-Lemeshow chi-square = 0.98 was not significant (p = 0.61), indicating good discrimination and calibration. Acutely important alternate findings were present in 0.3% (n=1) and 1.6% (n=2) of the "high" probability stone groups in derivation and validation sets, respectively. Background: Although previous research on kiosk technology has shown its effectiveness and safety to screen emergency department (ED) patients for intimate partner violence and for general health counseling, no study has focused on whether hospitals are interested in installing kiosks for non-research purposes. Objectives: This project aims to assess ED directors' willingness and capacity to adopt computer kiosks in ED waiting rooms for use as anonymous general health screening tools that can provide tailored health care information to users. Methods: A list of approximately 4000 EDs was obtained from the American Hospital Directory (AHD) list of hospitals. A random sample of 400 hospitals with emergency departments in the U.S. was then chosen from this list. Using a qualitative and quantitative 10-minute phone survey that had been previously piloted, trained research staff interviewed medical and unit directors in EDs across the country about their interest in anonymous kiosk screening for sensitive health and behavioral issues. The results were then analyzed. Results: 131 hospital administrators answered the survey (33% response rate). Approximately 70% of participants were interested in the technology. 79% of participants indicated that they would implement it if the system was free, but only 39% were interested in purchasing it. Medical directors, compared to unit directors, were more interested in the kiosks, either if they were free (OR=6.3) or if installation was the only cost (OR=2.6). When the responses were compared by location (urban and non-urban), urban hospitals were more likely to indicate interest in implementing the kiosks (OR=2.1). Non-urban and urban hospitals were approximately the same (38% and 40%, respectively) in their interest in implementing if they had to pay to install the technology. On the qualitative portion, many of the participants noted that they did not see the benefit of the information, the usefulness of screening, or the value added by having the kiosk. Conclusion: The majority of administrators are interested in the kiosks, but few are willing to pay to implement the technology. If unstaffed kiosks are to be implemented on a wide scale in hospital EDs as a screening tool, administrators, especially unit directors, need to see the value. Additionally, the kiosks would need to have a low cost for hospitals to implement. Medication Adherence Emerges as a Strong Target for mHealth Interventions in Qualitative Analysis of Text-med (Trial to Examine Text-based Mhealth for Emergency Department Patients with Diabetes) Elizabeth Burner, Michael Menchine, and Sanjay Arora University of Southern California, Los Angeles, CA Background: Mobile Health (mHealth) is an emerging field that uses a patient's cell phone to improve health knowledge and behavior. Consistent behavior change is difficult to achieve in patients with diabetes, particularly in low-income, resource-poor populations. The optimal design of mHealth interventions and the health behaviors that are most affected by mHealth are as of yet undetermined. 
Objectives: To explore the most important aspects of an mHealth intervention for ED patients with diabetes (TExT-MED) via a qualitative analysis of focus groups. Methods: We conducted five focus groups with a total of 24 participants who had received a 6-month text message based comprehensive diabetes intervention. Focus groups were stratified by language and sex. We imported verbatim transcripts into a computerized qualitative analysis program, and a rigorous text-base coding system was used. We analyzed the transcripts in iterative process, reexamining the earlier transcripts with the new codes derived from each round of analysis until saturation was reached. Broad categorical themes arose from the initial codes and were developed into a paradigm of mHealth effect on disease management. Results: Medication adherence was the most frequently coded subtheme, and the most frequently coded benefit from the TExT-MED program was improved medication adherence. Patients knew medications were critical to glycemic control and avoiding complications, but were unable to consistently take them due to cost, side-effects, preference for natural remedies, confusion about medication regimens, and sheer forgetfulness. Forgetfulness was the most recurrent theme across sexes and languages, and TExT-MED helped with this. Example quotes: I know I have to take care of myself. But if I don't have everything exactly-I don't remember when-did I take my medicine? Did I not? Oh, is this for morning? Is this for night? I go, 'oh I missed my medicine' but I don't do that no more. 'Cause then I'll get another TExT-MED message and then it'll say, 'don't forget your medicines,' and stuff like-oh my medicine. And I'll remember… But it works. I mean, to me, it's a good thing. 'Cause it doesn't bug me and it does remind me and I gotta accept the fact that I got it. Conclusion: Medication adherence was noted by these patients to be the most prominent obstacle and appears to be the most amenable to mHealth interventions. Background: There are numerous potential applications of mobile phone technology for ED patients post-visit, e.g., following up on medication adherence, referrals to community resources, and other preventive interventions. Moreover, there are potential applications of such technology within the ED setting, e.g., presenting real-time medical information to patients on their progress, care plan, and care team throughout their ED stay. However, it is unclear what the market potential is for 'going to scale' with innovative uses of mobile phone technology with ED patients. Previous studies have suffered from a number of methodological limitations, e.g., small sample sizes, singlesite studies, and relative over-enrollment of youth. Objectives: To characterize mobile phone ownership and usage among ED patients. Methods: SETTING -Three urban, high-volume EDs (Connecticut, USA) at Yale-New Haven Hospital (n=1,922), Bridgeport Hospital (1,900), and Hospital of St. Raphael (1, 966) . PARTICIPANTS -Consecutive patients who met the inclusion criterion (able to consent and presenting condition did not preclude interviewing) were interviewed 24 hours per day, seven days per week. Excluded patients included those who did not speak English or Spanish, presenting condition precluded interview, medical procedures, or the patient was unable or unwilling to consent. Overall, 89% of eligible patients consented to participate in this study. 
OBSERVATIONS -Patients interviewed over the course of six weeks in July and August 2012 by trained research assistants were to be characterized using descriptive statistics. Results: Mean age was 46 years (SD 20); 58% were female; 62% were white, 24% were Hispanic; 15% reported no-to-some schooling; and 39% reported income less than $15,000. Baseline ownership of cell phones was high (85%); and usage for calling (99%) and text messaging (73%) was high, while usage for internet (46%), social networking (39%), and gaming (29%) was moderate. Conclusion: ED patients reported higher ownership of mobile phones and smart phones than the general population. intubation, and very little on mechanical ventilation. Previous data have shown that complex critical care interventions can be difficult to implement in the ED setting. Acute lung injury (ALI) is a common form of hypoxemic respiratory failure that develops soon after admission from the ED. There is increased interest in preventing this syndrome, yet little research has focused on the role the ED may play in this arena. Assessing the willingness of emergency physicians to adopt ALI prevention strategies is necessary before planning clinical trials. Objectives: To quantify the proportion of academic EDs that initiate prolonged mechanical ventilation in the ED, and to assess the willingness of academic emergency physicians to adopt an ALI prevention strategy. We hypothesized that despite a lack of EM-based literature to guide therapy, emergency physicians would be responsible for the management of mechanically ventilated patients for prolonged periods, and would be willing to adopt a simple ALI prevention strategy if literature supported it. Methods: An electronic mail survey was sent to academic attending emergency physicians randomly selected from each state with EM residency programs (n= 43 centers). Descriptive statistics were used to describe the responses. Results: The initiation of mechanical ventilation was common (94.6%). Intubated patients routinely received mechanical ventilation for several hours (73%). Emergency physicians were responsible for the management of mechanical ventilation prior to ICU transfer (100%), despite a lack of literature to guide ED-based mechanical ventilation (78.4%). Emergency physicians also cited willingness to adopt an intervention that could decrease the incidence of ALI (100%). Conclusion: Prolonged mechanical ventilation in the ED is common. Despite a lack of ED-based literature to guide therapy, emergency physicians are responsible for this aspect of care. Emergency physicians seem willing to adopt a ventilation strategy that could decrease the incidence of ALI. With increasing use of the ED for critically ill mechanically ventilated patients, and an increasing burden of ALI, EDbased mechanical ventilation research is needed. It is possible that the paradigm of upstream interventions for other critical care syndromes carrying a high mortality (e.g. sepsis) may apply to ALI as well. Background: Intubation in critically ill patients in the emergency department (ED) is particularly dangerous in the setting of hypotension. During this peri-intubation period, hypotension increases the risk of cardiovascular collapse and other life threatening complications. Peripherally given diluted "push-dose" phenylephrine (PE) has been advocated to decrease hypotension during this period; however, the efficacy of this treatment is unclear. 
Objectives: To investigate the efficacy and usage of bolus-dose phenylephrine for peri-intubation hypotension (PIH) at an urban academic medical center. Methods: This was a retrospective chart review of all patients intubated between February 1, 2011 and February 1, 2012. All adult intubated hypotensive patients (SBP <90 within 60 minutes of intubation) were eligible for analysis. Patient's given PE were compared pre-and post-treatment on the following variables: systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate (HR) using the Student's t-test. Results: A total of 444 patients were intubated during the study period, of whom 120 were hypotensive and met eligibility criteria. Thirty patients received PE, 17 of them during the peri-intubation period. The rest were given PE sporadically and with the use of other vasopressors. During the peri-intubation period, PE treatment increased SBP (pre: 77 mmHg, 95% CI 72-82; post: 97 mmHg, 95% CI 86-108) and DBP (pre: 45 mmHg, 95%CI 40-51; post: 51 mmHg, 95% CI 45-56) within 30 minutes of treatment (p<0.01). There was no significant change in HR. In this academic ED, bolus-phenylephrine for hypotension was used sporadically without a clear systematic practice pattern. When PE was used during the peri-intubation period, treatment was associated with improvement in both SBP and DBP without affecting heart rate. Background: CT imaging is a commonly used diagnostic test for critically ill patients, but exposes patients to ionizing radiation, requires transport out of the ED, and can be time-consuming. Utilization of bedside ultrasound (US) in the evaluation of critically ill patients with undifferentiated hypotension can expedite assessment and has the potential to improve emergency physicians' diagnostic certainty. Objectives: To measure the effect of using bedside US on emergency physicians' plan to utilize CT imaging for hypotensive patients. We hypothesized that CT utilization would increase after results of US protocol. Methods: This is a sub-analysis of a prospective cohort study quantifying the effect of an US hypotension protocol (aorta, FAST, caval, cardiac, pneumothorax) on ED attending physicians' diagnostic uncertainty and resource utilization. That study enrolled a convenience sample of patients presenting with non-traumatic hypotension at an urban academic ED. Data collection included a probability-weighted differential diagnosis, critical therapies, and planned diagnostic testing including CT imaging. Physicians were surveyed immediately after initial evaluation, and surveyed again after the hypotension protocol was performed by an expert ED sonographer. Data included any change in plan to order CT scans after seeing the US results. Patients' inpatient courses were also recorded, including inpatient performance of CT imaging and ultimate diagnosis. Results: Of the 118 patients enrolled, ED physicians intended to perform CT in 47 (39.8%), and 18 (39.1%) yielded significant pathology. There was no significant change in the total number of planned CT scans before and after performing the US protocol; however, a change in the type of CT scans was observed in 26 of cases (22%), both from abdominal to chest, and vice versa. In 13 cases, an initially planned CT scan was cancelled as a result of the US; subsequent inpatient chart review showed that the cancelled scan did not lead to missed findings or adverse events. In 13 cases (11%), newly planned CT scans identified significant pathology. 
Conclusion: A bedside US protocol for patients with undifferentiated hypotension is not associated with an increase in CT utilization. The protocol is associated with a shift toward ordering CTs that are more likely to reveal pathology. Objectives: To determine how frequently inappropriate TVs are set for patients intubated in the ED, and whether body mass index (BMI) is associated with inappropriately set TV. We hypothesized that many patients have inappropriately set TVs and that patients with higher BMI are more likely to have inappropriately set TVs than those with normal BMI. Methods: This observational, retrospective study used an existing database of all patients intubated in the ED of a large, urban, tertiarycare, teaching hospital from 11/14/09 to 6/1/11 compiled by trained research assistants blinded to the objectives. Patient age, sex, weight, height, and set TV were recorded. Ideal body weight (IBW) and BMI were calculated for each subject. Each subject was categorized as underweight, normal, overweight, moderately obese, or severely obese based on the CDC classification of BMI (less than 17.5, 17.5-25, 25.1-30, 30.1-40, and greater than 40, respectively). Patients were assigned a binomial value for TV appropriateness (appropriate, 6-10cc/kg IBW; or inappropriate, less than 6 or greater than 10 cc/kg IBW). Logistic regression was performed to examine the relationship between TV appropriateness and BMI after controlling for potential confounders. Background: Different signs of physiologic instability are used clinically to identify patients in shock or who may progress to shock (pre-shock). We propose the following criteria to define physiologic instability: heart rate (HR) > 120, systolic blood pressure (SBP) < 90, shock index (SI) (HR/SBP) > 1 for 5 minutes, or lactate > 4. Objectives: To evaluate the epidemiology and rate of clinical deterioration among emergency department (ED) patients identified as having physiologic instability (pre-shock and shock). Methods: Ongoing prospective, observational study of consecutive adult (age > 18) patients presenting to a 55,000 visit urban tertiary-care ED with signs of physiologic instability from 11/1 -11/14, 2012. Exclusions: isolated atrial tachycardia, seizure, psychiatric agitation, or tachycardia due to traumatic pain. A continuous ED computer surveillance system identified patients. We confirmed eligibility, determined etiology of instability, and identified clinical deterioration through ED and in-hospital chart review. We defined clinical deterioration as acute renal failure (ARF) (2X increase in creatinine), non-elective intubation, initiation of vasopressors, and in-hospital death. We stratified patients: "ED shock" defined as persistent hypotension despite 2L IV fluids or vasopressor use in the ED; and, "pre-shock" for non-hypotensive instability. Point estimates are reported with 95% confidence intervals. We identified 222 patients, excluded 71, leaving 113 preshock and 38 shock patients; 78% were admitted. The underlying causes of instability were: 46% (38-54%) septic, 9.3% (4.7-14%) cardiogenic, 6.6% (2.6-10%) hemorrhagic, 13% (7.8-19%) hypovolemic, 1.3% (0-3.1%), anaphylactic, 0.7% (0-2.0%) neurogenic, and 23% (17-30%) other. Clinical deterioration occurred in 17/113 (15%; 8-22%) patients with pre-shock (7.1% ARF, 7.1% intubated, 1.8% vasopressors, 5.3% died) and 27/38 (71%; 57-86%) shock patients (29% ARF, 29% intubated, 74% vasopressors,11% died). ICU admission rate was 37% in pre-shock and 76% in shock. 
Objectives: To determine the predictive value of a palliative care screening test for complex ICU admissions. We investigate the applicability of a palliative care consult trigger to a population of MICU admissions to assess its potential utility in the ICU and the ED, the source of most ICU admissions. Methods: MICU admissions were screened at a single hospital to assess the prevalence of test-positive cases, and whether those with positive tests had longer hospital lengths of stay (LOS). Data were collected in the context of this CQI project, and patients who screened positive were compared to those who screened negative for the LOS outcomes. Patients screened positive if they met one or more of the following six criteria: admitted from nursing home, end-stage dementia, large IC bleed, metastatic cancer, post cardiac arrest, ICU readmission within 30 days. Results: During the four week study period, 67 patients were screened by the MICU nursing staff. Those who had one risk factor were no different in ICU LOS or hospital LOS. Those with two or more risk factors had a significantly longer ICU and hospital LOS (7.1 vs. 3.3; 15.9 vs. 7.0 days, respectively) r=0.36, p=0.003. Conclusion: Screening of ICU admissions can be easily achieved using a six-item score, suitable for performance in the ICU or ED. Patients with two or more palliative care risk factors had longer ICU and hospital LOS and may potentially benefit from palliative care consultations. Background: A number of mechanisms have been postulated to explain gender-specific differences in both the manifestations and incidence of anaphylaxis. However, no large-scale, populationbased studies have examined the gender-specific incidence, recurrence, and management of anaphylaxis in the emergency department (ED). Objectives: The purpose of our study is to explore gender-based variations in anaphylaxis, observed in a population of over one million urban inhabitants over five years. We hypothesized that there would be a number of gender-based differences in regards to prevalence, severity, etiology, treatment, as well as admission, length of stay, and recurrence. We conducted a retrospective cohort study in Calgary, Alberta. We relied on data drawn from administrative and electronic health record-derived sources and compiled between 2007 and 2011 from three tertiary care EDs. Inclusion criteria included age greater than 18, presentation to Calgary area EDs, and a primary diagnosis of anaphylaxis determined by International Classification of Diseases, Tenth Revision (ICD-10) and coded by a professional nosologist. Chi-square, Fisher's exact, and Mann-Whitney U tests were used to test the differences. Results: Anaphylaxis was diagnosed in 1260 patients who presented to the ED over this period, of whom 8% were in the most acute triage category and 3% were admitted to hospital. Women were more likely than men to present with anaphylaxis (55.4% vs, 44.6%; p < 0.01) and experienced longer delays to receipt of first dose of epinephrine (37 min vs 28 min; p < 0.05). Men demonstrated higher acuity presentations than women (85.2% vs 81%; p < 0.05) and more peanut-specific allergic reactions (22.2% vs 16.5%; p = 0.01). There were no gender-specific differences found for age of presentation, mode of arrival, medical management, length of stay, or admission or recurrence rates. Our results confirm that anaphylaxis is more common in women than men. 
However, our results challenge the currently held belief that women have higher recurrence rates and that there is no difference in severity between men and women. Future research is needed to help better understand these differences. with increased survival to hospital discharge, yet specific data guiding CC depth targets in children are lacking. Current guidelines for pediatric CCs are based on extrapolation from adults, animal models, and expert consensus. Given this lack of data, investigation into the biomechanical effect of CCs on the pediatric thorax is warranted. Objectives: To visualize the deformation of the pediatric chest during CCs. Methods: Three non-injurious CC simulations were performed on a single 7 year-old post-mortem human subject with intended compression depth of 0%, 15%, and 25% of external thoracic depth. A Phillips CPR Force Deflection Sensor replica was positioned at the inter-nipple line of the subject. Force was statically applied to reach the desired compression depth. Non-contrast-enhanced CT imaging was obtained for each compression level (see figure) . External thoracic depths were measured at the inter-nipple level from the most anterior to most posterior skin surface at midline. Internal depths were measured from the posterior sternal border to the anterior border of the corresponding vertebral body. Left ventricular volumes were calculated using three-dimensional multiplanar reconstruction and a four-level modified Simpson method. Results: Intended 25% compression resulted in change of 25 mm in external chest depth and 20 mm of internal depth (a 16.4% and 29.6% change, respectively). Total left ventricular volume (inclusive of myocardium and ventricular cavity) was reduced by 8.8% (62.7 cm 3 to 57.2 cm 3 ). Right ventricular volumes were not measured due to difficulty in right heart border visualization, but subjective deformation of the right heart was greater than left. Conclusion: This is the first reported CT visualization of thoracic contents during compression of the pediatric chest. Our results indicate a proportionally greater decrease in internal thoracic depth than external depth for a given compression depth. Previous studies extrapolating the desired compression depth from CT data have assumed internal compression directly proportional to external compression depth. These studies may overestimate the target depth necessary to achieve a given amount of internal thoracic compression. The device is a non-invasive method that measures cardiac output by continuous wave Doppler ultrasound (Uscom Ltd, Sydney, Australia). This device allows access to real-time cardiac output data which was previously not feasible. Bedside cardiac ultrasound (BCU) is performed routinely in ED patients with undifferentiated shock to help guide resuscitation. Objectives: We hypothesize the USCOM device will provide information during resuscitation. Methods: This was a prospective observational study of subjects presenting to the ED of an urban Level I trauma center in shock, as defined by shock index > 0.9, who were going to undergo BCU. Trained research associates (RAs) screened the ED for eligible subjects. In addition to performing BCU, the enrolling physician also used the USCOM device to measure cardiac output, systemic vascular resistance, stroke volume, and cardiac index. At least two BCU and USCOM readings were obtained. Physicians were asked to record the type of shock based on the measurements and their clinical impression. 
The kappa statistic was used to assess agreement between the type of shock diagnosed by BCU and USCOM and clinical impression, and to assess agreement between the discharge diagnosis and the ED diagnosis. Descriptive statistics were used as appropriate. Results: Between August and November 2012, 18 subjects were enrolled. Fifteen were included for analysis (median age 58, range 36-86), with 3 excluded due to repeat subject enrollment and incomplete data. Median shock index of enrolled patients was 1.02 (range 0.65-1.84), median SBP was 69 (range 61-145), and 67% (10/15) were female. There was moderate agreement on the type of shock between BCU and USCOM (k = 0.56, 95% CI 0.29, 0.83). There was moderate agreement between BCU and ED clinical impression (k = 0.46, 95% CI 0.17, 0.76) and good agreement between USCOM and ED clinical impression (k = 0.64, 95% CI 0.34, 0.93). There was poor agreement between the final discharge diagnosis and ED clinical impression (k = 0.051, 95% CI -0.22, 0.33). Conclusion: BCU and USCOM had similar findings regarding type of shock. Preliminary findings indicate a feasible role for the USCOM device in the resuscitation of critically ill ED patients. Katherine Berg, Amanda Graver, Justin Salciccioli, Tyler Giberson, Shiva Gautam, and Michael Donnino; Beth Israel Deaconess Medical Center, Boston, MA. Background: VO2 is determined by oxygen delivery and the ability of cells to extract that oxygen from the blood. Previous investigators have tried to find ways to increase VO2 with the goal of improving tissue oxygenation and therefore outcome. No effective intervention for increasing the extraction component of VO2 has yet been described. Objectives: We hypothesized that intravenous thiamine would increase oxygen consumption (VO2) in critically ill patients. Methods: We performed a pilot, open-label, interventional study of critically ill, mechanically ventilated adult patients in medical and surgical intensive care units at an urban tertiary care center between 10/2011 and 6/2012. Exclusion criteria were FiO2 >60%, temperature >100.0°F, thiamine use within the past two weeks, and presence of a chest tube or other source of air leak. We recorded VO2 continuously before and after administration of 200 mg of intravenous thiamine. The primary outcome was the change in VO2. We also measured plasma thiamine levels before and after thiamine administration. We used simple descriptive statistics to describe the study population and linear mixed modeling to evaluate the change in VO2 in patients after thiamine administration. Results: Twenty patients were enrolled. Four were excluded due to incomplete data, leaving 16 to be analyzed. There was a statistically significant increase in VO2 after thiamine (average increase of 16 ± 8 ml/min (SE), p=0.049). These results remained the same after adjusting for changes in cardiac index (CI). In patients with an average CI greater than our cohort's median value of 2.9 L/min/m², there was an increase in VO2 of 71 ml/min (± 16 SE, p<0.0001) after thiamine administration. Thiamine had no effect on VO2 in patients with reduced CI (<2.5 L/min/m²). Thiamine levels ranged from undetectable in one patient to 73 nmol/L (reference range 9-44 nmol/L). There was no association between initial thiamine level and change in VO2 after thiamine administration. Conclusion: We found that thiamine administration increases VO2 in critically ill patients. There was a larger effect in patients with higher than average CI.
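As a minimal sketch of the linear mixed modeling described above (a random intercept per patient and a fixed effect for the pre/post thiamine period), one could use statsmodels; the data frame and column names here are hypothetical, not the study's.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one VO2 average per patient per period
# (0 = before thiamine, 1 = after).
df = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3, 4, 4],
    "post_thiamine": [0, 1, 0, 1, 0, 1, 0, 1],
    "vo2": [210.0, 228.0, 195.0, 205.0, 250.0, 271.0, 230.0, 242.0],
})

# Random intercept per patient; the fixed-effect coefficient on
# post_thiamine estimates the average change in VO2 after administration.
fit = smf.mixedlm("vo2 ~ post_thiamine", df, groups=df["patient"]).fit()
print(fit.params["post_thiamine"])
```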
Methods: This was a prospective survey of actively practicing EPs across the country. A total of 750 surveys were distributed to a convenience sample of EPs across the US using SurveyMonkey®. Subjects were included if they completed any part of the vignette portion of the questionnaire. The questionnaire included six clinical cases designed from actual inpatients with active DNR orders who were interviewed at one institution. All cases were made into scenarios of patients presenting to the ED, followed by 13 standardized yes/no questions asking EPs what treatments and interventions they would provide each patient both if the patient had a valid DNR order and if the same patient did not have a DNR order. These interventions included: chest compressions, vasopressors, antiarrhythmics, ICU admission, central line, blood transfusion, invasive ventilation (ETT) for respiratory distress, ETT for respiratory arrest, noninvasive ventilation (NIPV), defibrillation, cardioversion, cardiac catheterization, and surgical procedures. The primary outcome measure was to determine which treatments or interventions were withheld based on DNR status. A paired t-test and mean percent difference were used to analyze the differences between groups. Results: Of 750 surveys distributed, 230 (30%) physicians responded from 17 states; 65.8% were male, 18% were residents, 75.5% were board certified, and mean length of practice was 13.5 years. For each of the 13 interventions, there was a significant (p<0.05) effect of DNR status on whether the procedure was performed. The largest mean percent differences between those having and lacking DNR orders were for: chest compressions (93%), ETT in arrest (88%), defibrillation (86%), and ETT in near arrest (75%). The interventions most likely to be performed regardless of DNR status, based on mean percent difference, included: NIPV (18%), blood transfusion (25%), antiarrhythmic medications (25%), central line placement (29%), and ICU admission (30%). Objectives: To estimate the effect of prehospital intubation duration on development of early (≤7 days) VAP. Methods: Single-center trauma registry cohort study of all patients (n = 544) presenting to a Level I trauma center between January 2005 and December 2011 and remaining intubated for ≥3 days. Demographic and injury data were abstracted from trauma registry records, and patients were linked with hospital-acquired infection epidemiology data. Results: Fifty (9.1%) trauma patients were diagnosed with VAP within 7 days of admission during the study period, and median time from intubation to ICU admission was 165 minutes. The mean injury severity score was 30.2 (SD 12.2). Variables associated with VAP development included age, injury severity score, implementation of an inpatient VAP prevention bundle, and duration of intubation. Using multivariable logistic regression to adjust for these covariates, the duration of time intubated prior to ICU admission was not associated with subsequent development of early (≤7 days) VAP. Objectives: The primary objective was to describe the relationship between serial modified cross products (CP), serial modified shock indices (SI), and outcomes in PCAS patients treated with TH and bundled care. Methods: Using a retrospective chart review, we evaluated PCAS patients admitted to our health system and treated with bundled post-arrest care, including TH and MAP optimization, between May 2005 and October 2010.
We analyzed the modified CP (MAP × HR) and modified SI (HR/MAP) at 1 hr, 6 hr, and 12 hr post-arrest in relation to survival to hospital discharge. We controlled for potential confounders including initial rhythm, length of arrest, lactate level, and age. Objectives: We sought to determine whether changes in mixed venous oxygen saturation (SvO2) were reflected in the retinal venous oxygen saturation (SrvO2) during progressive hypoxia and recovery from hypoxic insult. Methods: One domestic swine weighing 53 kg was anesthetized, intubated, and kept under general anesthesia with isoflurane. Arterial and venous sheaths were placed via femoral cutdown, and a Swan-Ganz catheter was placed into the pulmonary artery. The placement of all lines was confirmed via waveform. The eye was sutured open to facilitate oximetry measurements and was kept hydrated with a balanced salt solution instilled throughout the experiment. All retinal oximetry measurements were performed using the ROx-3 retinal oximeter, a scanning laser ophthalmoscope, with the blue-green minima technique. Mixed venous blood saturation was measured by the Swan-Ganz catheter. The animal was given a high-nitrogen, low-oxygen gas mixture, and measurements and retinal images were recorded at intervals determined by changes in arterial oxygenation as measured by pulse oximeter (SpO2). When SpO2 reached 60% or hemodynamic instability occurred, the animal was placed on 100% oxygen and allowed to recover fully, with additional measurements recorded during re-oxygenation. The animal was euthanized at the end of the experimental period. Results: Four successive hypoxic insults were performed for a total of 187 measurements, with SpO2 ranging from 98% to 60%, SvO2 from 73% to 18%, and SrvO2 from 93% to 21%. Linear regression demonstrated an association between SrvO2 and SvO2 for each insult. Bland-Altman analysis revealed mean differences (SD) of -3.64 (16.7), 6.29 (14.2), -6.09 (17.8), and -5.99 (14.9) between SrvO2 and SvO2 for each hypoxic insult. Objectives: To compare the patency of radial artery catheters and the accuracy of the pressure wave maintained with heparinized and non-heparinized infusions. Methods: In an emergency room and mixed intensive care unit of a tertiary-level hospital, patients were consecutively entered into the study and randomly assigned to receive arterial flush solutions containing 2 U/ml of heparin in 0.9% sodium chloride (group H, n=9) or non-heparinized saline (group NS, n=11). The flow rate of each flush solution was approximately 2 ml/hr. The functional life span of radial arterial catheters, the variation between arterial and brachial cuff pressures, and the quality of the pressure waveforms were observed. Results: The mean duration of cannulation was not significantly different between group H and group NS (114 and 94 hours, respectively; p=0.64). The variation between blood pressure as measured by arterial catheters and brachial cuffs was also not significantly different between the groups (p=0.73). Kaplan-Meier analysis showed no differences in the incidence of pressure wave dampening between groups (log-rank test, p=0.11). Conclusion: The use of heparinized flush solutions for arterial catheters does not appear to be justified. There is no significant difference between heparinized and non-heparinized flush solutions for the maintenance of radial artery catheter patency and function.
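The Kaplan-Meier/log-rank comparison reported just above could be reproduced along these lines with the lifelines package; the durations and censoring flags below are invented for illustration.

```python
from lifelines.statistics import logrank_test

# Hypothetical catheter lifespans (hours); event = 1 if pressure-wave
# dampening was observed, 0 if the catheter was censored at removal.
hours_h, event_h = [120, 96, 140, 110, 88, 130, 115, 100, 125], [1, 0, 1, 1, 0, 1, 1, 0, 1]
hours_ns, event_ns = [90, 105, 80, 95, 110, 70, 100, 85, 95, 120, 75], [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1]

result = logrank_test(hours_h, hours_ns,
                      event_observed_A=event_h, event_observed_B=event_ns)
print(result.p_value)  # analogous to the reported log-rank p = 0.11
```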
Background: Over 380,000 people suffer cardiac arrest in the United States annually, and survival depends largely on high-quality CPR, which can double or even triple the chance of survival. The burden of cardiac arrest requires that all members of the health care field be proficient in this vital skill. Learning CPR in medical school provides students with early exposure, with the ultimate goal of improving their long-term competency. Unfortunately, skill performance across the field is quite poor, and skills decay rapidly after initial training. American Heart Association (AHA) guidelines advise that recertification of skills should be done every two years; however, studies show that refresher courses should be administered more frequently. Objectives: To evaluate pre-clinical medical students' performance in CPR according to 2010 resuscitation guidelines, and specifically to ascertain what type of refresher leads to the greatest improvement in performance. Methods: Over two weeks in July 2012, a randomized sample of second-year medical students was categorized by when they last participated in an AHA Basic Life Support (BLS) CPR course. Group 1 (n=41) completed the course less than two weeks prior to assessment. The other students completed the course an average of 11 months prior, and they were randomly assigned into one of three groups preceding assessment: Group 2 (n=45) received no instructions, Group 3 (n=48) received a video refresher, and Group 4 (n=46) received the same video refresher with an additional seven minutes of hands-on practice time. Each student was then individually assessed on five cycles (two minutes) of BLS CPR performed on a Laerdal Resusci-Anne recording mannequin, and the quality of CPR was evaluated. Results: Students in Group 4 were more likely to provide high-quality chest compressions (ANOVA, Group 4 vs. Groups 2 and 3, p=0.015) and bag-valve-mask ventilations (ANOVA, Group 4 vs. Groups 1, 2, and 3, p=0.028). Further descriptive and inferential data were also collected. Conclusion: A short refresher with hands-on practice was shown to increase competency in CPR to levels similar to those of students who had just recently completed a CPR course. Brief hands-on refresher courses should be implemented more frequently for adequate retention of practical skills, but how to structure the most efficient refresher course for health care providers requires further research. Methods: Four domestic swine weighing 50-60 kg were anesthetized, intubated, and ventilated. Arterial and venous lines were placed via femoral cutdown, a Swan-Ganz catheter was placed into the pulmonary artery, and placements were confirmed by waveform. The abdomen was opened via midline incision, the cecum was identified, 2-3 vessels were ligated to create an ischemic insult, and the cecum was perforated with a 1-cm incision using electrocautery. The peritoneum was inoculated with 1 g/kg of fresh feces. Swine were observed until the onset of hypotension (MAP <60 mmHg) and were then given a fluid bolus and pressor agents to keep MAP at 50-60 mmHg. Animals were observed until SvO2 reached 50-60%, at which time they were resuscitated using a modified Rivers protocol. Vital signs were recorded every 10 minutes. Animals were euthanized at the end of the experiment. Results: All animals developed septic shock. A total of 87 measurements were performed. Mean cardiac output for the Vigilance CCO system was 6.80 L/min (range 1.94-14.2 L/min). Mean cardiac output for the FloTrac system was 6.90 L/min (range 2.3-15.6 L/min).
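A minimal sketch of the Bland-Altman agreement analysis applied to paired cardiac output readings like these follows; the k = 2 multiplier matches the limits reported in the next results (1.96 is also common), and the sample values are hypothetical.

```python
import numpy as np

def bland_altman(a, b, k=2.0):
    """Return the mean difference (bias) and limits of agreement
    (bias ± k·SD of the paired differences)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - k * sd, bias + k * sd)

# Hypothetical paired cardiac outputs (L/min) from the two monitors.
flotrac = [6.8, 7.2, 5.9, 10.4, 4.1, 8.3]
vigilance = [6.5, 7.6, 6.1, 9.2, 4.3, 8.8]
print(bland_altman(flotrac, vigilance))
```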
Bland-Altman analysis revealed a mean difference of 0.10 ± 2.87 L/min, with limits of agreement of (-5.63, 5.84). Values were increasingly divergent when cardiac output was greater than 10 L/min. All animals survived to the end of the experiment. Conclusion: Cardiac output estimates from the FloTrac Vigileo system were not significantly higher than those from the Edwards Vigilance CCO system but did display significant variability in this swine model of septic shock. Background: There is no intervention with proven long-term benefit for patients with non-shockable cardiac arrest rhythms (pulseless electrical activity and asystole). Epinephrine is recommended for treatment in non-shockable arrest, but this therapy has recently undergone scrutiny. Objectives: The objective of this study was to evaluate the association between time to epinephrine administration and outcome for in-hospital cardiac arrest patients with non-shockable rhythms. We hypothesized that patients with a more rapid time to epinephrine would have better survival and neurologically intact survival. Methods: We queried the AHA Get With the Guidelines-Resuscitation database (a large in-hospital cardiac arrest registry) for adults suffering cardiac arrest with non-shockable rhythms (asystole, pulseless electrical activity). The primary outcome was survival to hospital discharge. Secondary outcomes included return of spontaneous circulation, 24-hour survival, and survival with good neurologic status. Multivariable logistic regression was used to assess the relationship between time to epinephrine administration and outcomes, adjusting for age, sex, arrest characteristics, quality of resuscitation, hospital characteristics, and comorbidities. Results: We identified 29,470 adults with in-hospital cardiac arrest with non-shockable rhythms. The mean age was 68 ± 15 years, 57% were male, and 54% had asystole. Median time to first dose of epinephrine was 3 minutes (IQR 1-5). Analyzing time by 3-minute intervals, there was a stepwise decrease in survival (Figure 1) and neurologically intact survival (Figure 2) with increasing interval of time to epinephrine. There was also a statistically significant decrease in return of spontaneous circulation and 24-hour survival over the same time intervals (p < 0.05 for all time periods). Results were robust after multivariable adjustment as well as sensitivity analyses. Conclusion: In patients with non-shockable cardiac arrest, more rapid delivery of epinephrine is associated with increased rates of return of spontaneous circulation, in-hospital survival, and neurologically intact survival. This is the first study to demonstrate a neurologically intact survival advantage of an intra-arrest medicinal intervention. Methods: This cross-sectional study was conducted at an academic ED with a Level I trauma center from September to November 2012. A structured, anonymous survey was administered to alert, non-trauma ED patients ≥18 years. Research associates obtained demographic and socioeconomic data, and asked, "If the chance that a CT scan will show a life-threatening injury is less than 25%, would you want your physician to order the CT?" To ascertain "Acceptable Risk," this query was posed for a chance of showing a life-threatening injury at 10%, 5%, and 2%. Patients were then asked the same series of items, but this time they considered the scenarios assuming they or their wife/significant other were pregnant. Chi-square testing and multivariable regression were used.
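The chi-square testing named in the Methods just above could look like the following; the 2×2 counts are invented purely to show the mechanics.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: wanting the CT (yes/no) at a stated 5% chance of
# a life-threatening finding, with and without the pregnancy framing.
table = [[140, 60],    # baseline scenario: yes, no
         [95, 105]]    # pregnancy scenario: yes, no
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```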
Background: With the growing utilization of ultrasonography in emergency medicine, combined with concern over adequate pain management in the emergency department (ED), ultrasound guidance for peripheral nerve blockade in the ED is an area of increasing interest. There have been multiple reports supporting the use of ultrasound guidance in peripheral nerve blocks. However, in order to perform a peripheral nerve block, one must first be able to reliably identify the specific nerve prior to the procedure. Objectives: The primary purpose of this study was to establish the number of supervised peripheral nerve examinations necessary for an emergency physician to gain proficiency in accurately locating and identifying the median, radial, and ulnar nerves of the forearm. A number of attempts of "0" means that the physician did not need any additional hands-on training sessions beyond the initial 1-hour didactic session and two hands-on instructor-guided lessons. Methods: The proficiency outcome was defined as the number of attempts before a resident was able to correctly locate and identify, 100% of the time, the radial, median, and ulnar nerves on 10 consecutive examinations. Didactic education was provided via a one-hour lecture on forearm anatomy, sonographic technique, and identification of the nerves. Participants also received two instructor-guided hands-on exams for each nerve prior to entering the study. Count data are summarized using percentages or medians and ranges. Random effects negative binomial regression was used for modeling panel count data. Results: Complete data for the number of attempts, sex, and PGY training year were available for 38 residents. Nineteen male and 19 female residents performed examinations. The median PGY year in practice was 3 (range 1-3), with 10 (27%) in year 1, 8 (22%) in year 2, and 19 residents (51%) in year 3 or beyond. The median number of attempts and range for the radial, median, and ulnar nerves were 1 (0-12), 0 (0-10), and 0 (0-17), respectively. The association of PGY year and sex with proficiency is summarized in the table. Conclusion: The maximum number of supervised attempts to achieve accurate nerve identification was 17 (ulnar), 12 (radial), and 10 (median) in our study. The only significant association was found between years in practice and proficiency (p = 0.015). We plan to expand on this work with a future study assessing physicians' ability to adequately perform peripheral nerve blocks, in an effort to decrease the need for more generalized procedural sedation. Objectives: To determine the optimal upper BMI limit in adults for a high-frequency linear transducer (HFLT) in the diagnosis of acute appendicitis. Methods: We performed a retrospective chart review of all patients with an admission diagnosis of appendicitis at a university tertiary referral center between 2009 and 2011. A total of 621 patient records were reviewed and 450 were included in the final analysis. Excluded records did not have documented CT scan images and/or BMI data. Multiple anatomical measurements were taken at the cross-sectional level of the appendix, and a scatter plot of distance versus BMI was generated. Distance was calculated as the sum of compressed subcutaneous fat thickness, abdominal wall muscle thickness, appendix width, and 0.5 cm for psoas muscle identification. In a separate unpublished study, we determined that subcutaneous fat compressibility was 52.2% (95% CI, 50.3% to 54.1%). This value was used to calculate the compressed subcutaneous fat thickness.
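The distance calculation just described can be written out as below. The interpretation of "compressibility" as the fractional reduction in fat thickness under probe pressure is our assumption, and all measurement values are illustrative.

```python
def imaging_depth_cm(fat_cm, muscle_cm, appendix_cm,
                     compressibility=0.522, psoas_margin_cm=0.5):
    """Sum of compressed subcutaneous fat, abdominal wall muscle,
    appendix width, and a 0.5 cm margin for psoas identification.
    Assumes compressibility is the fraction by which the fat layer
    thins under probe pressure."""
    compressed_fat = fat_cm * (1.0 - compressibility)
    return compressed_fat + muscle_cm + appendix_cm + psoas_margin_cm

# Hypothetical measurements (cm); compare against the 6 cm HFLT range.
depth = imaging_depth_cm(fat_cm=4.0, muscle_cm=1.5, appendix_cm=0.8)
print(depth, depth <= 6.0)
```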
To be conservative, we used 6 cm as the maximum clinically effective imaging range of a HFLT. Finally, a receiver operating characteristic (ROC) analysis was performed to determine the optimal upper BMI limit for a HFLT in adults with acute appendicitis. Results: Based on our ROC analysis, the optimal upper BMI limit for a HFLT was 32. A BMI of 32 or less would provide a sensitivity of 90% (95% CI, 77% to 98%) and specificity of 81% (95% CI, 77% to 85%) for patients within the maximum clinically effective imaging range (6 cm) of a HFLT. Objectives: In this study, we examine the course of urban, public hospital DKA patients by examining length of stay by DKA severity. We hypothesize that decreased severity of DKA will be associated with shorter lengths of stay. Methods: We performed a retrospective review of patients presenting to the ED at an urban county hospital with DKA between 2009 and 2011, identified by ICD-9 codes. Cases were excluded if they did not meet ADA laboratory criteria for DKA or were younger than 18 years. Clinical and demographic data were extracted from subject charts by two medical students, and 10 percent of charts were compared to ensure consistency. DKA severity was defined by acidemia and compared to length of stay with simple linear regression. DKA severity was stratified as severe (pH <7.00), moderate (pH 7.00-7.24), and mild (pH 7.25-7.30). Precipitants and demographic factors between the severity groups were examined. Data analysis was then performed with ANOVA and chi-square tests as appropriate using Stata. Objectives: We compared diagnosis and treatment plans before and after CT in patients with suspected renal colic. Our aim was to evaluate how often changes in diagnosis, treatment, and disposition are made after obtaining CT scans. Methods: In this prospective observational study, we enrolled a convenience sample of ED patients with suspected renal colic for whom CT was planned. Inclusion criteria were: chief complaint consistent with renal colic, renal colic as the most likely diagnosis, age 18 to 50 years, and clinical stability. Primary exclusion criteria were: chronic kidney disease (Cr >2.0), urinary tract infection, recent CT (<6 months), and history of previous kidney stone. Pre-CT and post-CT surveys were completed by the treating provider. Results: Ninety-three patients were enrolled. The discharge diagnosis was renal colic in 62 patients (67%). CT confirmed an obstructing kidney stone or bladder stone in 50 patients (median size 2.5 mm), with associated hydronephrosis in 49 patients. UA showed blood in 46 (92%) of these patients. Of the 42 patients with no obstructing stones, renal colic was diagnosed in 12. Alternative diagnoses were most commonly musculoskeletal pain (11) and non-specific pain (13). There were 2 cases of important alternative diagnoses provided by CT: one ovarian tumor and one diverticulitis. After CT scan, urology was consulted for 13 patients and 7 patients had changes in disposition (all admitted). Fifteen patients were prescribed alpha-blocking medicine after confirmation of an obstructing kidney stone. Sixteen providers felt that CT would not change management; in these cases, CT offered no alternative diagnosis and did not change disposition. Seventy-seven providers thought the CT would possibly or definitely be useful. All cases of important alternative diagnoses and disposition changes were among these. Background: The rate of abdominal CT use in admitted adult trauma patients at our Level I trauma center has declined over the last ten years, while use of FAST has increased.
The rising use of FAST may have played a role in the reduction of abdominal CT use. Methods: Patients 4 years of age and older presenting to the ED with suspected appendicitis were eligible for enrollment. After informed consent was obtained, BUS was performed on the subjects by trained emergency physicians who had undergone a minimum of 1-hour didactic training on the use of BUS to diagnose appendicitis. Elements of clinical history, physical examination, white blood cell count (WBC) with polymorphonuclear percentage (PMN), and BUS findings were recorded on a data form. Subject outcomes were ascertained by a combination of medical record review and telephone follow-up. Results: A total of 125 subjects consented for the study, and 116 had adequate key data for final analysis. Prevalence of appendicitis was 37%. Mean age of the subjects was 20.2 years, and 51% were male. BUS was 100% sensitive (95% CI 87-100%) for detection of appendicitis, with a positive predictive value of 72% (95% CI 56-84%). Specificity was not calculated because of the large number of non-diagnostic BUS studies. Subjects with appendicitis had a significantly higher occurrence of anorexia, nausea, and vomiting, and a higher WBC and PMN count when compared to those without appendicitis. Their BUS studies were significantly more likely to result in visualization of the appendix, appendix diameter >6 mm, appendix wall thickness >2 mm, periappendiceal fluid, and a sonographic McBurney's sign (p<0.05). In subjects with diagnostic BUS studies, WBC, PMN, visualization of the appendix, appendix diameter >6 mm, appendix wall thickness >2 mm, and periappendiceal fluid were found to be predictors of appendicitis on logistic regression. BUS success and accuracy were independent of operator, parenteral narcotic or antiemetic administration, subject body mass index, and scanning time. Conclusion: BUS is highly sensitive for appendicitis diagnosis. We also identified several components in the routine ED workup and BUS that are associated with appendicitis, generating hypotheses for future studies. Conclusion: Of 18 patients found to have a gestational sac (GS) in a cohort of 220 symptomatic first trimester patients, none were found to have an ectopic pregnancy (EP). Further investigation, including prospective evaluation with a more generalizable physician population, may show that the finding of GS on POCUS may be helpful in excluding EP. The operating characteristics of the interventions in diagnosing urolithiasis (UL) were calculated: sensitivity, specificity, and likelihood ratios of a positive (LR+) and negative (LR-) test. We searched the PubMed and Embase databases separately using the medical subject headings "H&P + kidney stone," "UA + kidney stone," and "US + kidney stone." Studies were assessed using the Quality Assessment Tool for Diagnostic Accuracy Studies. Data analysis was performed using Meta-DiSc with a random-effects model. Conclusion: Bedside echocardiography can be used as an adjunct assessment tool to guide resuscitation. However, the decision to terminate resuscitation in cardiac arrest patients should not be based solely on echocardiography findings. Background: In the emergency department (ED), blood hemoglobin concentration is an essential test to evaluate the necessity of blood transfusion. Noninvasive hemoglobin measurement provides an immediate estimate of hemoglobin concentration and so has the potential to improve ED patient care.
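For the operating characteristics listed above, the positive and negative likelihood ratios follow directly from sensitivity and specificity; the numbers below are illustrative, not pooled estimates from this review.

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Illustrative values only.
print(likelihood_ratios(sensitivity=0.86, specificity=0.78))
```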
Objectives: The aim of the present study was to assess the agreement of hemoglobin levels between the novel point-of-care Masimo Pulse CO-Oximeter (SpHb) and the Sysmex XE-5000 assay in the central laboratory (tHb) in the ED setting. Methods: This prospective, observational study was conducted in the ED at the Chang Gung Memorial Hospital in Linkou, Taiwan from September 2011 to August 2012. The Pulse CO-Oximeter measurements were recorded immediately (<10 minutes) after venipuncture from patients who visited the ED. Patients with nail polish use, prominent hand tremor, or obvious finger skin lesions were excluded. Agreement between measurements was assessed by the average difference (mean ± SD, g/dL) from Bland-Altman analysis. Linear regression was utilized to evaluate potential factors underlying the difference between SpHb and tHb. Conclusion: Lower distal finger temperature and perfusion index were found to be factors interfering with the SpHb measurement. Our data also indicated that SpHb correlated poorly with tHb values; however, no specific factor could be identified in this study. Further optimization of this point-of-care Pulse CO-Oximeter is merited. Background: Despite questions about the accuracy of ultrasonography in the detection of FB, its use has been advocated because it is the only widely available and rapidly deployable modality for the detection of radiolucent FB. Narrow-profile linear transducers with a "hockey stick" (HST) design are increasingly available on the compact ultrasound machines widely used in the ED. These probes might be preferable in difficult-to-access areas or for vascular access, but their accuracy compared to traditional linear-array transducers (TLT) in the detection of small FBs has not been studied. Objectives: To compare the accuracy of hockey stick and conventional linear transducers in the detection of small FBs. Methods: This was a prospective, randomized study by experienced sonologists at an academic ED. Two ED attending physicians, two ultrasound fellows, three senior ED residents, and one medical student (eight total) participated in the study. Three had performed >20 FB scans; five had <10. Ten animal tissue models were prepared from pigs' feet. Three glass and three wood FBs, 1 × 1 × 5 mm in size, were introduced at 1 cm tissue depth. Four sham FB sites were probed but were left empty. A TLT (HFL38x, 6-13 MHz) and an HST (SLAx, 6-13 MHz) were used on a SonoSite M-Turbo. Each tissue model was scanned in a water bath with both transducers, in a non-sequential random order. Data were collected on accuracy of FB detection by probe and FB material, learning curve, sonologist experience with transducer type, and pre- and post-study transducer preference. Paired t-test analysis of tissue models was used. Results: On average, the accuracy of the conventional TLT vs. the HST was 57% vs. 71% (p<0.05). There was no learning effect based on the order of scans performed (p=0.9). Wood and glass were accurately identified 60% and 79% of the time, respectively. Overall, sonologists were more experienced with, and preferred, the linear transducer. Sonologists preferred the TLT in terms of comfort level (7 vs. 1) and FB detection (5 vs. 1). Background: Ultrasound is becoming increasingly commonplace in emergency medicine. Traditionally, radiography was used to identify suspected foreign bodies, but plain radiography may not identify radiolucent foreign bodies. Ultrasound has recently been shown to be sensitive for the detection of both radiodense and radiolucent foreign bodies.
No studies have described the sensitivity or specificity of emergency medicine residents for identification of soft tissue foreign bodies. Objectives: To describe the accuracy of residents and attending physicians in the identification of soft tissue foreign bodies. A secondary objective was to determine whether there was any correlation between training level and scan accuracy. Methods: A live pig was used as the model for the soft tissue foreign bodies. Seven foreign bodies were implanted in the pig's abdominal soft tissue. These foreign bodies included glass, plastic, metal, and wood. All foreign bodies were approximately 1 cm in length and 1-2 mm in diameter. A total of nine areas were marked off. Seven of those areas contained a foreign body and two areas were used as negative controls. The training level of participants ranged from medical students through attendings. The total number of prior ultrasound scans (of all types) ranged from 0 to >100. The total number of participants was 24: 3 MS4, 5 PGY-1, 3 PGY-2, 4 PGY-3, and 9 attendings. Results: The overall percentage correct was 62% (SD 18%). Medical students were correct 55% of the time (SD 22%), PGY-1 55% (SD 13%), PGY-2 56% (SD 22%), PGY-3 78% (SD 15%), and attendings 62% (SD 18%). There was no significant difference when comparing training levels. There was no statistically significant difference comparing the number of prior scans to percentage correct. Conclusion: Our data failed to show a significant difference in the accuracy of residents and attending physicians based on training level. Also, there was no statistically significant difference among individuals with varied prior ultrasound experience. This result was surprising, as logic would indicate a trend toward higher accuracy with additional practice; it is likely due to the small sample size (<10 individuals in all groups). The data do not support the use of ultrasound as the only means of ruling out a foreign body. Our study was not sufficiently powered to detect any difference in the ease of identification between the various substances. Background: Ultrasound guidance has become virtually the standard for central line placement in the emergency department. The so-called transverse approach and, to a much lesser extent, the longitudinal approach are the two "standard" approaches that have been used for this procedure. There is yet another approach, the oblique approach, which seeks to combine the benefits of both the transverse and longitudinal approaches while simultaneously minimizing the disadvantages of each. Objectives: To compare three different approaches for ultrasound-guided cannulation of the right internal jugular vein. We hypothesized that time to cannulation would be decreased and operator confidence increased using the oblique approach. Methods: Design: Randomized, prospective trial. Setting: Academic ED with an affiliated residency program. Participants: 20 EM physicians ranging from PGY-1 through the attending level. Interventions: Using a portable US machine and a US-compatible mannequin, each participant attempted internal jugular vessel aspiration using the three US approaches in random order. Outcomes: Total number of attempts as well as time to aspiration of blood substitute. Operator confidence that the vein and not the artery was cannulated was measured on a 10-cm visual analog scale from least to most. Data Analysis: Outcomes among groups were compared with ANOVA and chi-square as appropriate. Results: A total of 20 physicians were enrolled in the study.
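The ANOVA comparison named in the Data Analysis section above might be run as follows; the per-approach times are fabricated for illustration (given the skewed medians reported next, a nonparametric test could also be considered).

```python
from scipy.stats import f_oneway

# Hypothetical times to vessel aspiration (seconds) per US approach.
longitudinal = [68, 143, 52, 90, 146, 61, 75]
transverse = [27, 16, 83, 30, 22, 45, 19]
oblique = [34, 23, 59, 40, 28, 31, 37]

stat, p = f_oneway(longitudinal, transverse, oblique)
print(f"F={stat:.2f}, p={p:.4f}")
```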
There was a statistically significant difference in median (IQR) time to vessel aspiration (p<0.001) among the three approaches, with the longitudinal approach requiring the most time, 68 (43-146), and the transverse, 27 (16-83), and oblique, 34 (23-59), approaches requiring less. For the longitudinal, transverse, and oblique methods, 50%, 70%, and 80% of the physicians, respectively, required only one attempt for successful completion. Operator confidence was highest for the oblique method, 9 (9-9.5), and lowest for the transverse method, 7 (2.5-9) (p=0.001). Conclusion: The oblique approach for ultrasound-guided cannulation of the right internal jugular vein compared favorably with the more commonly used transverse and longitudinal approaches in a mannequin. Background: Computed tomography is an important imaging modality used to aid diagnosis of a variety of disorders. Imaging quality may be improved if intravenous contrast is added, but there is concern for potential renal injury known as contrast-induced nephropathy (CIN). Objectives: Our goal was to perform a systematic review and meta-analysis of the literature to compare the risk of nephropathy following contrast-enhanced CT (CECT) versus non-contrast CT. Methods: With the help of a medical librarian, we searched MEDLINE and the Cochrane Database of Systematic Reviews for relevant articles using a search strategy that looked for contrast, CT scans, and renal injury. We searched the references of systematic reviews and meta-analyses on CIN for additional original studies. Included articles specifically compared rates of renal insufficiency in patients who received IV contrast versus patients who received no contrast. The authors of the individual articles defined renal injury. Statistical comparisons were made using RevMan. Results: A total of 764 articles were initially identified. Nine studies involving 21,191 participants were included in the final analysis (two prospective, seven retrospective). Only one paper compared the number of patients requiring dialysis in each study arm; in it, 3.2% in the contrast arm required dialysis, versus 7.3% in the non-contrast arm. Meta-analysis demonstrated that CECT was not significantly associated with nephropathy when compared to non-contrast CT (OR = 0.85, 95% CI = 0.63-1.14). The I² index for assessing heterogeneity was 62%, indicating moderate heterogeneity. Conclusion: There is no difference in the incidence of renal injury between patients receiving CECT and patients receiving non-contrast CT. Other potential etiologies of patients' renal dysfunction must be explored before labeling IV contrast as the culprit. Objectives: The objective of this study was to assess the diagnostic utility of an ultra-low-dose CT in identifying direct or indirect evidence of ureteric stones in ED patients. Methods: Patients who were undergoing CT evaluation for kidney stones were consented in the ED. Patients underwent a CT scan at both 90% and 10% of the usual radiation dose. The 90% scan was utilized clinically. Three radiologists blinded to clinical data read both scans and commented on direct evidence of ureteric stones or indirect evidence of stones (hydroureter or hydronephrosis). The gold standard for a ureteric stone was either a stone seen by ≥2 study radiologists and the clinical read, or unblinded adjudication if there was discrepancy between ≥2 study radiologists. Study radiologists' agreement with the gold standard was determined for both scans. Descriptive statistics were utilized to represent the data.
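The I² heterogeneity index quoted in the contrast-CT meta-analysis above is derived from Cochran's Q; a minimal sketch with invented study log-ORs and variances follows.

```python
import numpy as np

def cochran_q_i2(log_ors, variances):
    """Cochran's Q and I^2 (%) for inverse-variance-weighted effects."""
    e, v = np.asarray(log_ors), np.asarray(variances)
    w = 1.0 / v
    pooled = np.sum(w * e) / np.sum(w)   # fixed-effect pooled log-OR
    q = np.sum(w * (e - pooled) ** 2)
    df = len(e) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Nine hypothetical studies, echoing the nine included above.
print(cochran_q_i2(
    [-0.2, -0.1, 0.05, -0.3, 0.0, -0.15, 0.1, -0.25, -0.05],
    [0.04, 0.09, 0.06, 0.05, 0.10, 0.07, 0.08, 0.05, 0.06]))
```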
Results: Forty-three patients were recruited with complete data sets; of these, 23 were classified as having stones. All three study radiologists agreed with the gold standard in 81.4% of high-dose scans, and 2 of 3 agreed in 11.6%. In the low-dose scans, all three radiologists agreed with the gold standard in 20.9%, 2 of 3 agreed in 48.8%, and in only 7% did none agree. Individually, the radiologists had relatively high accuracies (95.3%, 92.9%, and 90.5%) in the 90% scans, while in the 10% scans the accuracies dropped to 39.5%, 69.0%, and 78.6%, respectively. The sensitivity and specificity were 88.1% and 98.2% in the 90% scans, and 64.2% and 61.4% in the 10% scans. The combination of direct or indirect evidence for stones revealed individual accuracies of 95.3%, 90.5%, and 85.7% in the 90% scans and 55.8%, 66.7%, and 76.2%, respectively, in the 10% scans. The sensitivity and specificity for the combination of direct or indirect evidence were 86.1% and 97.8% in the 90% scans, and 69.6% and 60%, respectively, in the 10% scans. BMI, stone size, and location did not affect accuracy of the CT read. Conclusion: The overall accuracy of the ultra-low-dose scan for either direct or indirect evidence of ureteric stones is fair. We noted significant variability between study radiologist interpretations. Objectives: To identify factors associated with the need for cervical spine CT (CSCT) among patients undergoing initial x-ray for c-spine trauma, as a step toward predicting the need for initial CSCT in low- to moderate-risk trauma. Methods: We reviewed records for trauma patients >18 years old in an urban academic Level I trauma center from March 2008 to 2010, comparing patients who underwent CSCT following initial x-rays (n=547) to a random sample of those with x-rays alone (n=652). Exclusions included non-trauma visits or CT prior to arrival. We performed explicit record review including comorbidities, medications, mechanism, emergency severity index (ESI) score, demographics, and radiologic findings. We performed univariate analysis with the chi-square test for categorical variables and simple logistic regression for continuous variables. We evaluated inter-rater reliability using the kappa statistic. Results: No significant differences in race or sex were observed between groups. Variables associated with CSCT included acuity, mechanism of injury, age, and weight (Table 1). The most common x-ray findings among patients requiring CSCT were inadequate visualization, manifestations of degenerative joint disease (DJD), and malalignment (Table 2). Inter-rater reliability was moderate to strong (k=0.79 for medications, k=0.67 for comorbidities). Conclusion: The most common x-ray findings in patients requiring CSCT were poor visualization, DJD, or malalignment. Increasing age, weight, acuity, and mechanism of injury were associated with the need for CSCT. These potential predictors require further evaluation with multivariate regression analysis and validation to determine whether a clinical decision rule can aid in imaging efficiency. Objectives: The aim of this study was to quantify and characterize lawsuits related to emergency physicians performing POC US. Methods: The Westlaw database is a repository of case law, state and federal statutes, public records, and other information sources; it is one of the main search engines used by legal professionals. The Westlaw database was retrospectively reviewed for reported cases in U.S. state and federal courts from January 2008 to August 2012. Search terms included sonograph, ultrasound, and variations of these words.
These terms were searched within 250 words of "emergency" and within 10 words of "physician" or "doctor." Cases were reviewed by emergency physicians with advanced ultrasound training. Cases were included if an emergency physician was named, the patient encounter was in the ED, the interpretation of or failure to perform an ultrasound was a central issue, and the application was within the ACEP core applications. Results: A total of eight cases were identified. Of these, two cases were related to failure to perform POC US examinations. In one case, it was alleged that physicians failed to perform a FAST examination. In another case, it was alleged that a DVT US could have been performed. Both of these patients died within days of ED discharge. Two other cases were related to failure to perform US in a timely manner. In both of these cases, the US studies were ordered through radiology, but fell under the ACEP core applications and could have been performed at the bedside. The remaining four cases were misdiagnoses by other providers performing US exams, two by obstetricians and two by radiologists. Objectives: The objective of this study was to compare the quality of images obtained during a FAST examination using a pocket-sized ultrasound machine with those from a larger cart-based machine typically used in the ED. Our hypothesis was that the images from the larger machine would be of higher quality. Methods: This was a prospective observational trial, conducted in the ED of a university tertiary care referral center. The study population consisted of a convenience sample of adult ED patients presenting with abdominal complaints. Patients were enrolled by emergency medicine residents at the PGY-2 level and above. Images of the standard four views of the FAST exam were obtained using both the pocket-sized machine (GE Vscan) and the standard machine (Zonare z.one ultra sp). Depth and gain settings were maintained as consistently as possible. All images were obtained under the supervision of an RDMS-credentialed physician. Images were rated by three experienced sonographers, using a 1-10 scale with 10 representing the highest quality, for detail (DET), resolution (RES), and total image quality (IQ). These parameters have been previously defined and studied. For each patient, we calculated the mean difference (Zonare minus VScan) by averaging scores from all reviewers from all views from both machines. We used the t-distribution to calculate 95% CIs. We chose a sample of 28 patients to have a 90% probability of detecting a 10% difference. Conclusion: Our meta-analysis suggests that POC ultrasound is moderately sensitive in detecting the presence of hydronephrosis in ED patients with nephrolithiasis. POC US could be used as part of a clinical algorithm in the evaluation of ED patients presenting with acute flank pain. This could potentially reduce the use of CT in the ED evaluation of patients with suspected nephrolithiasis. Results: Eight articles were selected for primary analysis. Random effects meta-analysis showed that US guidance improved success when compared to traditional techniques (pooled OR 3.58 (1.79, 7.19)). There was no significant heterogeneity between studies (p=0.151). The figure shows a forest plot of the OR for success for each study as well as a summary estimate. Nine articles were selected for secondary analysis.
Secondary analysis of time to cannulation showed no significant decrease in the US-guided group, with a weighted mean reduction for US-guided techniques of -1.91 (-5.95, 2.12). There was significant heterogeneity between these groups. The number of punctures required for successful cannulation was lower in the US-guided group (weighted mean reduction of -0.50 (-0.78, -0.23)), though there was significant heterogeneity between the groups. Results: A total of 2037 articles were retrieved, of which only 10 contained quantitative data on earthquake-related pediatric injuries and were used in the final report. All studies were retrospective, had different upper limits for the pediatric age group ranging from 14 to 18 years, and reported injuries using heterogeneous categories and classifications. Two studies reported patterns of injury for all pediatric patients, including both those who were eventually admitted and those discharged. Seven articles described the injuries by anatomic location, five articles described injuries by type, and only two articles described injuries using both systems. Fracture rates, reported in six articles, ranged from 15%-55%. Head injury rates, reported in five articles, ranged from 2%-61%. Conclusion: Differences in age group definitions of pediatric patients, and in the injury classification systems used, contribute to the variability in reported injury patterns. Objectives: We performed a randomized clinical trial comparing the paper-based disaster triage (PBT) system and the computer-based triage (CBT) system used in day-to-day operations. Our hypothesis was that CBT was more efficient than PBT in a disaster. Methods: The study was a randomized clinical trial using simulated adult and pediatric patients, conducted at Kings County Hospital Center in Brooklyn, NY on 11/10/2011. Forty simulated patients were randomized to two groups of 20 assigned to either CBT or PBT by two experienced ED triage nurses. The study's primary outcome was the time needed to triage. The secondary outcome of the study was errors made in triage. Major errors were defined as errors affecting triage disposition, while minor errors did not affect final disposition. We present the data as means with SD. The primary outcome was compared using Student's t-test. The secondary outcome, consisting of the number of content errors in each group, is reported using descriptive statistics. Results: Nurse A completed CBT in 28.43 minutes and PBT in 16.12 minutes. Nurse B's trial times were 29.45 minutes with CBT and 25.47 minutes with PBT. While the average trial times were lower in PBT, this was not statistically significant (Table 1). There were a total of 10 major errors in PBT and 1 major error in CBT. A total of 23 minor errors were found in PBT and 2 in the CBT trials (Table 2). Conclusion: Although CBT showed no significant benefit in triage time as compared to PBT, the error rate with PBT was markedly higher. The low error rate in the CBT system may also reflect the triage nurses' unfamiliarity with the paper disaster forms, which may have contributed to the error rate in the paper group. Furthermore, no adjustment was made for the time that would be needed to correct or otherwise address the major and minor errors in PBT. Further study is necessary to examine the effect of these errors on ED throughput time and time to accurate triage. Background: Twitter is a social network based on short messages that can be read, sent, and shared on desktop computers, laptops, and mobile devices over Ethernet, wi-fi, or cellular networks.
Because of this resiliency, Twitter has emerged as a useful tool for disseminating information to the public before and during disasters. Advertisers have studied the reach and effectiveness of Twitter messages ("tweets"), describing characteristics that increase the chance the messages will be shared ("retweeted") by users. But little has been published on the effectiveness of tweets from public officials in times of emergency. Retweeting these messages is a way to enhance public awareness of potentially important instructions from public officials in a disaster. Objectives: To determine the characteristics of New York public officials' most-shared messages before and during the Sandy storm. Methods: Twitter's publicly available programming interface was queried from 10-27-12 to 11-02-12 to return tweets and retweet counts from public officials such as @MikeBloomberg and @NYGovCuomo. Lexical diversity was calculated. We also analyzed all retweets containing the hashtags #sandy and #nyc, which Twitter users included to help identify New York City storm-related messages. Results: Of 50,014 tweets matching search parameters before, during, and immediately after Sandy hit New York City, 3242 were messages from public officials that had been shared (retweeted). New York City Mayor Michael Bloomberg and staff, tweeting from @MikeBloomberg, on three occasions had messages each retweeted over 100 times. New York Governor Andrew Cuomo and staff, using @NYGovCuomo, had several tweets retweeted 22-52 times during the analyzed period. The lexical diversity of these official tweets was similar (2.25-2.49) and well below the average for non-official tweets mentioning #sandy and #nyc (3.82). Most official tweets with substantial retweets included a URL for further reading. Conclusion: Officials using Twitter conveyed actionable information during the Sandy storm, which was shared well beyond existing subscriber bases and potentially improved situational awareness and disaster response. These tweets, characterized by a lower lexical diversity score than other city- and Sandy-related tweets, were likely easier to understand, and often linked to further information and resources. Results: The HVAs demonstrated that at all CiH sites there is an extreme lack of resources, with most children living at poverty level. The HVAs revealed severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health care, shelter, and education. In some centers this results not only from lack of resources but also from lack of access to services. In our observational and relative risk assessment at each facility, some facilities keep children healthier through simple changes in practice. At each site we observed numerous physical hazards that could be easily mitigated with limited resources. Conclusion: The HVA can assist non-governmental organizations like CiH in developing a prioritized and sustainable improvement plan for Haitian orphanages. Over time, we hope the environment, daily life, and future for the children of Haiti improve despite the complex issues facing the country as a whole. Objectives: This preliminary analysis was performed to support a larger effort investigating the association of airway choice with functional status after OHCA. We hypothesized that there would be no difference in functional outcome in this small unselected population. Methods: Subjects were lay bystanders, >18 years old, recruited at a Tucson, AZ shopping mall.
Subjects were randomized into two groups: (1) UBV group: watched a 60-second ultra-brief video (UBV) on CCO-CPR; (2) control group (CTR): sat idle for 60 seconds. Subjects were then taken to a private area with a Laerdal Skillreporter mannequin and read a scenario concerning a sudden collapse in the shopping mall. They were instructed to do what they thought was best for the scenario. Subjects' performance was recorded for 2 minutes, including responsiveness (i.e., time to start of compressions and calling 9-1-1) and CPR performance data (i.e., compression depth, compressions/min (cpm), and hands-off time). Results: Fifty-one subjects, with no CPR training in the previous 24 months, were enrolled, with 24 viewing the UBV (47.1%). Subjects' demographic data were similar between the two groups (age, sex, race, and whether they had heart disease/heart attack or lived with someone at risk of heart attack). Objectives: To strategically and systematically place AEDs in a community by identifying areas of mismatch (where the incidence of OHCA is high and few AEDs are located), to ultimately increase AED utilization in the community. Methods: Design: Secondary analysis of the Denver, Colorado component of the Cardiac Arrest Registry to Enhance Survival (CARES) dataset and AED registry. Setting: Large urban metropolitan community with a two-tiered EMS system. The catchment area for this community is approximately 150 square miles with an approximate residential census of 600,000 people and includes 10 adult acute-care receiving hospitals. Population: Consecutive arrests from January 15, 2009 through December 31, 2011 entered by Denver Health Paramedic Division staff into the CARES dataset. Data Analysis: All arrests and existing publicly available AEDs were geocoded and mapped by neighborhood. Areas of mismatch, where the ratio of arrests to AEDs was higher than expected, were identified as possible neighborhoods for AED placement. A neighborhood analysis was then conducted in conjunction with community-based partners to: (1) understand the underlying demographics of the neighborhood; (2) identify sites for possible AED placement based on community recommendations; and (3) approach potential sites to assess desire for hosting an AED. Results: Twelve neighborhoods in three areas of Denver (Northeast, East, and West) were identified as mismatch areas. Based on community input and geographic distribution, 25 locations were identified as primary sites for AED placement in the community, with an additional 7 locations identified as secondary sites. Within two months of project completion, 12 devices had been placed in these neighborhoods. Background: Vasopressor administration during cardiac arrest has been associated with return of spontaneous circulation (ROSC) but not long-term survival. A recent retrospective study reported a greater likelihood of ROSC when vasopressors were administered within the first 10 minutes of arrest. However, it is unlikely that this relationship is a binary function (i.e., ≤10 vs. >10 minutes). More likely, this relationship is a function of time measured on a continuum, with diminishing effectiveness even within the first 10 minutes of arrest and, potentially, some lingering benefit beyond 10 minutes. However, this relationship remains undefined. Objectives: To assess the likelihood of ROSC as a function of the time interval between call receipt and first vasopressor administration measured on a continuum. Methods: We conducted a retrospective study of cardiac arrest using a statewide prehospital database.
All adult patients suffering witnessed, non-traumatic arrests between January and June 2012 were included. Chi-square and t-tests were used to analyze the relationships between ROSC and the call-receipt-to-vasopressor interval (CRTVI); patient age, race, and sex; endotracheal intubation (ETI); AED use; first presenting cardiac rhythm; and bystander CPR. A multivariate logistic regression model calculated the odds ratio (OR) of ROSC as a function of CRTVI while controlling for statistically significant variables from the univariate analyses. Results: Of the 1,150 patients meeting inclusion criteria, 518 (45.0%) experienced ROSC. ROSC was less likely with increasing CRTVI (OR=0.95, p<0.01). Compared to patients with shockable rhythms, patients with asystole (OR=0.36, p<0.01) and PEA (OR=0.57, p<0.01) were less likely to achieve ROSC. Bystander CPR was a predictor of ROSC (OR=2.4, p<0.01), whereas other variables were not. Conclusion: Time to vasopressor administration is significantly associated with ROSC, and for every one-minute delay between call receipt and vasopressor administration, the odds of ROSC decline by 5%. Similar to previous studies, we observed an increased likelihood of ROSC among patients presenting with shockable rhythms and receiving bystander CPR. These results support the idea of a time-dependent function of vasopressor effectiveness across the entire range of administration rather than just the first 10 minutes. Objectives: To validate whether a gray-white ratio (GWR) <1.2 indicates poor survival post arrest and to determine the inter-rater reliability among reviewers. We also sought to develop a novel GWR that may be simpler to use in practice. We hypothesized that a GWR <1.2 is universally fatal in post-cardiac arrest patients. Methods: Retrospective analysis of post-cardiac arrest patients admitted to a single center from 2008 to 2012. Inclusion criteria were age >18 years, non-traumatic arrest, and a brain CT scan within 24 hours of ROSC. Three independent physician reviewers measured attenuation in pre-specified areas. Simple descriptive statistics were used to describe the study population. Cohen's kappa test was used to determine the intraclass correlation between the three physicians who read the CT scans. Results: We evaluated 300 patients and 91 met inclusion criteria. Mean age was 64 years (SD 17); 69% were male. In-hospital mortality was 63%, and 59% of patients received therapeutic hypothermia. Of 91 patients, 12 were excluded due to a technically inadequate study, leaving 79 CTs for evaluation. For the validation measurement, the intraclass correlation coefficient was 0.70. A cut-off value of 1.1 was also examined. For reviewer 1, only one patient with a GWR <1.1 survived. Reviewers 2 and 3 had no surviving patients with a GWR <1.1. Conclusion: A GWR <1.2 on CT imaging within 24 hours after cardiac arrest was associated with a high degree of mortality and poor neurologic outcome; however, it was not associated with 100% non-survival as previously reported. We caution against implementation based on previous investigations until reasons for the disparate findings are elucidated. A threshold <1.1 may be a safer cut-off to identify patients with a negligible chance of neurological survival. Intraclass correlation among reviewers was moderately good. Our experimental model did not perform well enough to be used for prognostication.
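A minimal sketch of the GWR computation used above, assuming it is the ratio of mean gray- to mean white-matter attenuation (Hounsfield units) over the pre-specified regions of interest; the HU values are illustrative.

```python
import numpy as np

def gray_white_ratio(gray_hu, white_hu):
    """GWR = mean gray-matter HU / mean white-matter HU."""
    return np.mean(gray_hu) / np.mean(white_hu)

gwr = gray_white_ratio(gray_hu=[37.0, 35.5, 38.2], white_hu=[29.5, 30.1, 28.8])
# Thresholds discussed above: <1.2 (prior literature) and the more
# conservative <1.1 suggested by these data.
print(round(gwr, 2), gwr < 1.2, gwr < 1.1)
```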
Objectives: To examine compliance with EMS CPAP protocol initiation in one jurisdiction and to determine the potential for expanding a prehospital CPAP protocol into a neighboring EMS jurisdiction currently without a protocol. Methods: This was an observational, retrospective data collection study from a large urban teaching hospital. Data were abstracted from EMS registry data and hospital charts from March to December 2010. All patients with discharge diagnoses of acute decompensated heart failure who were transported by local advanced life support EMS and triaged immediately to our ED resuscitation module were included. Data collected included demographics, heart rate, respiratory rate, blood pressure, oxygen saturation, and use of CPAP. Unpaired t-tests were used to compare quantitative variables and chi-square tests to evaluate qualitative variables. Results: There were 160 patients reviewed from the registry; 12 were excluded due to incomplete charts. There were 61 patients from the jurisdiction with the EMS CPAP protocol. Twenty-two patients received CPAP and 39 did not. Of those who did not, 22 were eligible based on the criteria in the protocol, with no documentation as to why CPAP was not given. Initial systolic blood pressure (156.1 vs. 181.0, p = 0.03) and oxygen saturation (73.0 vs. 84.4, p = 0.003) were lower in the eligible group that received CPAP compared with those that did not receive CPAP. No significant differences were found in terms of sex, race, age, initial diastolic blood pressure, heart rate, or respiratory rate. Of the 87 patients from the jurisdiction without an EMS CPAP protocol, 55 (63.2%) received immediate CPAP in the ED, and all would have met inclusion criteria in the standard EMS CPAP protocol. Conclusion: Compliance with the EMS CPAP protocol was poor. Over half of the patients who did not receive CPAP were eligible, and documentation as to why CPAP was not initiated by the EMS provider was lacking. Initial oxygen saturation was higher in eligible patients not given CPAP, although it still represented hypoxia. The EMS CPAP protocol could be expanded to the neighboring jurisdiction, as the majority of patients were eligible. Barriers to protocol implementation need to be identified.

Objectives: We assessed the feasibility of using the EMSTARS database for prehospital quality improvement by extracting data on prehospital airway management. Additionally, we describe the frequency and success rates of various airway management procedures and devices. Methods: This was a cross-sectional study of an existing state database (EMSTARS) that contains numerous data fields describing prehospital patient care. The database is managed by the Bureau of EMS within the state Department of Health. There were 3,158,580 valid patient records derived from 144 EMS agencies in the EMSTARS database. An aggregate report of the total number of patients with any airway procedure performed from January 1, 2008 through August 2011 provided the basis for this analysis. The primary outcome was the ability to extract the necessary data fields to evaluate prehospital airway management. These fields included the frequency and success rates of different types of prehospital airway procedures. Additionally, we extracted the frequency of end-tidal carbon dioxide (ETCO2) use in reported successful endotracheal intubations and evaluated ETCO2 parameters in cardiac arrest patients. Results: The total number of airway procedures reported was 21,738, with frequency and success rates detailed in the accompanying table.
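Success rates of this kind are proportions with binomial uncertainty. A minimal sketch of computing per-procedure rates with Wilson 95% CIs follows; the procedure names and counts are invented placeholders, not EMSTARS figures.

```python
# Per-procedure success rates with Wilson 95% confidence intervals.
# Counts below are hypothetical, for illustration only.
from statsmodels.stats.proportion import proportion_confint

procedures = {  # (successes, attempts)
    "endotracheal intubation": (5200, 6500),
    "supraglottic airway": (1800, 1950),
    "bag-valve-mask only": (9500, 9500),
}

for name, (successes, attempts) in procedures.items():
    rate = successes / attempts
    lo, hi = proportion_confint(successes, attempts, alpha=0.05, method="wilson")
    print(f"{name}: {rate:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```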
Objectives: To descriptively report time to endotracheal tube placement by paramedics in real patients treated in the prehospital setting. Methods: This post-hoc analysis is from an IRB-approved, multi-agency, prospective, non-randomized, cross-over trial comparing success rates and complications for two video laryngoscope systems (Storz C-MAC®, Macintosh #4 blade; King Vision™, size 3). Paramedics were instructed to record all advanced airway placement attempts with the C-MAC device. Research staff downloaded recorded placement attempts and measured the following placement intervals (in seconds): 1) blade in mouth to passage of the ET tube past the vocal cords, or 2) blade in mouth to removal of the blade when ET attempts were unsuccessful. First-attempt and overall success rates were descriptively reported. Median times and interquartile ranges to successful or unsuccessful placement attempts were calculated. Results: Thirty-four of 66 (52%) C-MAC placement attempts during the study period had recordings available for initial review. Eleven recordings were subsequently excluded (four still images only, seven poor video quality), leaving 23 recordings for evaluation. The first-attempt success rate was 61% (14/23), with an overall success rate of 83% (19/23). The median first-attempt time to successful placement was 33 seconds (IQR 17.5-64.5), with a median second-attempt time to successful placement of 35 seconds (IQR 31-83). The median time for unsuccessful first placement attempts was 46 seconds (IQR 28-80). Only three of the six cases with more than one placement attempt had video available from both placement attempts (median total time = 85 seconds). Conclusion: This is the first report of independent, objectively measured paramedic time to advanced airway placement using a video laryngoscope system in the prehospital setting. Recording video laryngoscope systems appear to be a valuable tool for assessing intubation times in prehospital medicine.

Results: Positive initial troponin (OR 1.4, 95% CI 3.0, 10.7), among other prehospital factors, was associated with going to PCI, but none of these was significant for actual positive findings at PCI. Conclusion: Our data suggest that prehospital suspicion of STEMI has potential to improve patient selection for PCI, but predicting which of these patients will have a positive PCI remains elusive.

Objectives: To determine the effects of a Maryland EMS protocol recommendation, and subsequent requirement, that trauma patients be taken by ground if within a 30-minute drive time of a trauma center, followed by a requirement for physician consultation for certain categories of trauma patients. Methods: The Maryland State Police HEMS med-evac computer-aided dispatch (CAD) database was queried for the total number of in-state scene flights from January 1, 2000 through December 31, 2011. The number of flights was then evaluated against both the recommended, and ultimately required, EMS protocol that trauma patients be taken by ground if less than a 30-minute drive from a trauma center, followed by required physician consults for certain categories of patients. Results: Figure A demonstrates that the recommendation for ground transportation occurred just prior to a sustained drop in flights from counties located less than 30 minutes from a trauma center (Point A). The total number of flights statewide per year, however, remained level at 5,099.
Counties located farther from a trauma center experienced a drop in flights when this recommendation became a requirement (Point B), with total flights dropping from 4,426 to 3,395 (a 23% decrease). A state helicopter crash and the requirement for physician consultation prior to helicopter transport of trauma category C (mechanism of injury) or D (co-morbid conditions or provider discretion) patients occurred just prior to the statewide decrease (Point C) to 1,990 flights (a 41% decrease). Figure B shows that helicopter usage rates per 1,000 Marylanders decreased from 0.92 to 0.35 (a 62% decrease) during the period in which these three statewide protocols were implemented. Conclusion: Modifications to state protocols were associated with changes in patterns of helicopter usage. Protocols that mandate physician consults for air medical transport or require ground transport based on distance from a trauma center were associated with a decreased number of flights.

Results: 119 patients with confirmed MIs were identified. Ten of 119 (8.4%) MI patients were undertriaged to a non-emergent response. Seventy-one (61%) had chief complaints of chest pain and 48 (39%) had chief complaints other than chest pain, demonstrating a high rate of atypical cardiac presentation. None of the 71 (0%) chest pain patients were undertriaged, but 10 of the 48 (21%) non-chest pain patients were undertriaged as alpha (p < 0.0001; RR = 1.27, 95% CI 1.10-1.47). Of these ten patients, 8 (80%) fell under the chief complaint of "sick person," 9 (90%) were over the age of 50 (range 37-93), and 6 (60%) were male. Conclusion: Patients with MI who present atypically are undertriaged as non-emergent by MPDS at a significant rate. Further investigation into the chief complaint "sick person," the most common presentation among those receiving a non-emergent response, is warranted to determine whether additional risk factors can be identified to improve MPDS triage of these high-risk patients.

Objectives: To identify factors associated with delay from EMS arrival to out-of-hospital ECG in patients with complaints suggestive of acute coronary syndrome (ACS). Methods: This was a retrospective cohort study of 54,579 people screened for inclusion in the IMMEDIATE Trial (a randomized controlled trial of an experimental drug initiated in the out-of-hospital setting for participants with likely ACS) by 36 EMS systems in 13 US cities. People receiving out-of-hospital ECGs during the enrollment period were included in the screened cohort. Twelve percent of the screened cohort was excluded due to missing data, as were 349 participants <18 years old. Data on EMS arrival time, out-of-hospital ECG time, sex, age, and primary complaint were collected. Hierarchical multivariable regression models were used to assess differences in time from EMS arrival to ECG by sex, age, primary complaint, and weekend or night enrollment, controlling for clustering of data at the site level. Linear regression was used to model elapsed time (in minutes) from EMS arrival to ECG. Logistic regression was used to analyze the outcome of time to ECG greater than 30.2 minutes, representing a 15-minute delay beyond the median time to ECG in this study. Results: As shown in the table, women, older participants, and those without chest pain had higher odds of delay from EMS arrival to out-of-hospital ECG. Analyses of raw times were consistent with these results.

Conclusion: An identified group of "EMS super-users" accounted for an inordinate number of ambulance transports.
These individuals were predominantly male, homeless, and/or alcohol abusers. These data support efforts to place these identified patients in detox programs and to evaluate alternatives to ambulance transport to emergency departments.

Results: There was 100% concordance between the ED diagnosis of anaphylaxis and NIAID/FAAN criteria. Of 97 cases with anaphylaxis, a correct ambulance diagnosis of anaphylaxis was made in 69 (sensitivity 0.71, 95% CI 0.614-0.792). Of the 28 false negatives, 12 (43%) were given the diagnosis "allergy related" and none received epinephrine from ambulance staff; 15 (54%) were given epinephrine on arrival in the ED. A diagnosis of anaphylaxis was made in 77 who did not have anaphylaxis, giving a positive predictive value (PPV) of 0.454 (95% CI 0.393-0.553). In these 77 false positives, an allergy-related diagnosis was given by ED staff in 56 (73%), and 21 (27%) were given epinephrine by ambulance staff. One of these presented with a mild rash and suffered an adverse event (fast atrial fibrillation) after being given epinephrine. Conclusion: Diagnosis by paramedics was poor. The distinction between a diagnosis of "allergy-related" versus anaphylaxis was important because treatment with epinephrine followed the latter, and because there was evidence of both under-treatment requiring urgent intervention on arrival in hospital and over-treatment causing avoidable harm.

Results: There were 84 patients in each cohort. Most patients in each cohort had injuries potentially resulting in uncontrolled internal hemorrhage (87% adult, 77% pediatric, p = 0.158). There was no difference between cohorts in the presence of abnormal mental status or peripheral pulse in the prehospital setting. A prehospital fluid bolus was given to 7% of adult and 1% of pediatric patients (p = 0.117). An ED fluid bolus was given to 27% of adult and 7% of pediatric patients (p = 0.001). Pediatric patients had prehospital IVs placed less frequently (41% vs. 71%, p < 0.001). The protocol was followed in 96% (n = 81) of pediatric and 89% (n = 75) of adult patients (p = 0.131). A bolus was given when not indicated in 6 adult patients and 1 pediatric patient (p = 0.117). Pediatric patients were discharged from the ED more often (50% vs. 24%, p = 0.001) and were less often admitted to the intensive care unit or operating room (18% vs. 45%, p < 0.001). Conclusion: Prehospital provider compliance with a restrictive fluid resuscitation protocol for trauma did not differ between adult and pediatric cohorts in this single EMS agency-based study. Management and disposition differed between groups after arrival at the trauma center.

Background: Acidosis and catecholamine surges have been proposed as underlying physiologic derangements in subjects at high risk for arrest-related death. The effect of conducted electrical weapons (CEWs) on these variables has been studied, but there has not been an attempt to find an "equivalency" with exercise. Objectives: Here we attempt to determine a sprint that is "equivalent," in markers of acidosis and catecholamines, to a TASER X26 CEW exposure. Methods: Blood was drawn immediately before the intervention, and then at 0, 2, 4, 6, 8, and 10 minutes after, and analyzed for pH, lactate, and serum catecholamines. Sprint subjects were split into three groups for analysis: 20-yard sprint (group 1), 30-50 yard sprint (group 2), and >50-yard sprint (group 3), and compared to a 5-second TASER X26 exposure (group 4). Medians of each group were compared at set time points after sprinting or exposure using the K-sample equality-of-medians test.
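As a minimal sketch of the two nonparametric comparisons used in this analysis, the code below applies a K-sample equality-of-medians test (Mood's median test, as implemented in SciPy) across groups and the within-group Wilcoxon signed-rank comparison described next. The lactate values are simulated, not the study's data.

```python
# K-sample median test across groups at one time point, plus a paired
# Wilcoxon signed-rank test of within-group change from baseline.
# All values below are synthetic, for illustration only.
import numpy as np
from scipy.stats import median_test, wilcoxon

rng = np.random.default_rng(0)
# Hypothetical post-exertion lactate (mmol/L) for the four groups.
g1, g2, g3, g4 = (rng.normal(loc, 0.8, size=10) for loc in (3.0, 4.5, 6.0, 2.5))

stat, p, grand_median, table = median_test(g1, g2, g3, g4)
print(f"K-sample equality-of-medians test: p = {p:.3f}")

baseline = rng.normal(1.5, 0.4, size=10)  # hypothetical baseline values
stat, p = wilcoxon(g2, baseline)          # within-group change, group 2
print(f"Wilcoxon signed-rank, group 2 vs. baseline: p = {p:.3f}")
```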
Differences within each group were compared across set time points using the Wilcoxon signed-rank test. Results: Thirty-seven subjects were enrolled. At baseline, the pH medians in the four groups were not different (p = 0.715). At all subsequent time points the median pH differed between the groups (p < 0.001). The pH had decreased at 0, 2, 4, and 8 minutes in groups 1, 2, and 3 (p < 0.001). At 6 minutes all the groups were significantly different from baseline (p < 0.001). At 10 minutes, the pH of groups 2 and 3 was significantly different from baseline (p < 0.01). At baseline, total catecholamines were not different between the groups (p = 0.381). At 0 and 2 minutes after the exposure, the total catecholamine medians across groups were significantly different (p = 0.025). There was no difference in catecholamines at 4, 6, 8, or 10 minutes. At 0 and 2 minutes after exposure, total catecholamines were elevated from baseline in groups 2 and 3 (p < 0.001). Conclusion: A 5-second CEW exposure affects markers of acidosis and stress less than or equal to a 20-yard sprint.

Objectives: This study's purpose was to evaluate whether EMT-Basics without prior 12-lead ECG acquisition and telemetry transmission training can attain and maintain those skills using a novel 12-lead ECG device (ReadyLink™). Methods: Fifty-eight EMT-Basics without prior 12-lead ECG acquisition and telemetry transmission experience were enrolled. Education regarding 12-lead ECG acquisition and telemetry transmission using the study device (ReadyLink™) was delivered in a classroom setting and consisted of a 25-minute training video. Study subjects were then allowed hands-on practice with the device for up to 15 minutes. Using volunteer human subjects to simulate patient encounters, study subjects acquired 12-lead ECGs and transmitted them via telemetry to a study website using a proprietary transmission system (LifeNet™). 12-lead ECG acquisition and telemetry transmission evaluations were conducted on the initial day of training, approximately 90 days post initial training, and approximately 180 days post initial training. Study subjects did not use 12-lead ECGs in actual patient encounters during the study period. All 12-lead ECGs acquired and transmitted to the study repository were reviewed by an emergency physician to determine whether the quality of the 12-lead ECG was clinically usable for determining activation of emergency interventional cardiology services. Results: During the 6-month study period, the EMT-Basics involved acquired and transmitted 132 12-lead ECGs. 130 (98.5%) 12-lead ECGs were determined through emergency physician overread to be clinically usable for determining activation of emergency interventional cardiology services. Extreme motion artifact prevented clinically usable data in the remaining 2 (1.5%) 12-lead ECGs. Conclusion: EMT-Basics, inexperienced in 12-lead ECG acquisition and telemetry transmission, are able to successfully achieve clinically usable 12-lead ECG acquisition and telemetry transmission using the ReadyLink™ device.

Objectives: The primary objective was to better understand patients' rationale for using an ambulance versus other means of transport to the hospital. We hypothesized that there would be no differences between the two groups in how they selected transport to the ED. Methods: A convenience sample of patients was consented by research associates from 8 AM to 12 AM, 7 days a week, at two urban EDs.
Subjects were enrolled any time after initial triage through ED discharge. Subjects were given a list of possible motivations and asked to select all that influenced their chosen method of transportation to the hospital. Results: A total of 5,034 subjects were enrolled: 1,674 using EMS services and 3,360 presenting as walk-ins (alternate transport). Age, race, and insurance status were similar between the two groups. In the EMS group, 73.5% of those surveyed believed their medical condition required EMS, versus 52.6% of the walk-in cohort (p < 0.001). The EMS cohort was much more likely to be triaged urgent versus non-urgent compared to the walk-in cohort (81% vs. 32%, p < 0.001). Both cohorts reported equally that they would choose to travel via the same method again (80% vs. 81%, p = 0.174). Cost (98% vs. 84%, p < 0.001) and perception of faster time to transport (16% vs. 22%, p < 0.001) showed small differences between the cohorts. A substantial minority of those transported by EMS felt this would affect how quickly they were seen by a doctor (43%) and that their doctor knew how they arrived (35%). Among those who used EMS, a significantly lower proportion of uninsured patients believed their condition required EMS compared with insured patients (68.7% vs. 75.6%, p < 0.001). Conclusion: Patients endorsed surprisingly equal confidence in their transport choice. Those who arrived using EMS were triaged to higher acuity, suggesting more severe pathology. However, additional factors, including the belief that they would be seen faster and insurance status, suggest other motives in patients' EMS usage.

Objectives: To determine factors associated with in-hospital mortality in patients who present with prehospital hypotension. Methods: Cohort study of all patients who presented to our Level I trauma center with prehospital hypotension, defined as SBP <100 mm Hg. The outcome was mortality during the index ED visit and hospitalization. We used the "AVPU" scale (patient is Alert, responds to Verbal or Painful stimulation, or is Unresponsive) to determine alertness at the scene. We defined improvement of SBP as an increase of 10 mm Hg or more at ED triage. We collected demographic and clinical variables and analyzed univariate and multivariate associations with the outcome. We used a stepwise regression model that included clinically important variables and other factors associated with mortality in the univariate analysis (p < 0.20) to arrive at the final model. We used t-tests or chi-square tests as appropriate. Results are expressed as odds ratios with 95% CIs. Results: 350 patients presented between 7/1/10 and 6/30/11 with prehospital hypotension. 177 (51%) were female; 85 (24%) were younger than 55 years, 145 (41%) were 55-79, and 120 (34%) were 80 or older. Prehospital systolic blood pressure ranged from 40 to 98 mm Hg, with a median of 80 (IQR 74, 90). Forty (11.4%) died during the admission. Among the 253 (73%) patients whose SBP had improved by triage, 21 (8.3%) died during the admission. Factors associated with mortality in the multivariate model were decreased level of consciousness at the scene by the "AVPU" scale (OR 5.1, 95% CI 2.4, 10.7) and failure of SBP to improve by at least 10 mm Hg by ED triage (OR 3.1, 95% CI 1.5, 6.5). Age category, sex, and past medical history were not statistically significant.
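The multivariate model above reports adjusted odds ratios with 95% CIs. A minimal sketch of that kind of analysis on simulated data follows; the predictor prevalences are loosely matched to the cohort, but the coefficients and data are arbitrary assumptions, not the study's records.

```python
# Multivariate logistic regression reported as odds ratios with 95% CIs,
# on a simulated cohort. All data and effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 350
df = pd.DataFrame({
    "decreased_loc": rng.binomial(1, 0.30, n),   # not alert at scene
    "sbp_no_improve": rng.binomial(1, 0.27, n),  # SBP failed to improve by triage
})
# Assumed true model used only to generate outcomes for the demo.
logit_p = -3.0 + 1.6 * df.decreased_loc + 1.1 * df.sbp_no_improve
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("died ~ decreased_loc + sbp_no_improve", data=df).fit(disp=0)
summary = pd.concat([np.exp(fit.params).rename("OR"), np.exp(fit.conf_int())], axis=1)
print(summary)  # columns 0 and 1 are the 95% CI bounds
```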
Conclusion: Patients who present with prehospital hypotension, were not alert at the scene, and whose SBP does not improve by ED triage have a high risk of mortality and should be prioritized.

Background: Undocumented immigrants face challenges to receiving health care in the USA. Beyond well-described economic and language barriers, a general lack of familiarity with the system and its policies may make it difficult for them to access medical care for even the most basic problems. Objectives: To assess knowledge and beliefs about health care workers' reporting (or non-reporting) of illegal immigrants in the emergency department. Methods: Three cohorts (Latino citizens, Latino non-citizens, and non-Latino citizens) were enrolled in equal numbers. Subjects provided verbal consent to participate in a brief interview while in the ED, without delaying treatment. The survey was completely anonymous and confidential. Investigators also administered the same survey at two EDs in California, and results were compared across all three sites. All reported differences were statistically significant (p < 0.05). Results: Several differences were identified between the cohorts. Insurance patterns varied between study cohorts, and group differences varied between states. Beliefs held about the treatment of non-citizens varied between cohorts. At the ED in Texas, non-Latino citizens were the most likely to report a belief that the health care system treats legal and non-legal immigrants differently, and Latino non-citizens also reported the greatest measures of satisfaction with their care. Responses that varied between states included legality of immigration status, Latino non-citizens' stated alternatives for treatment, and non-Latino citizens' belief in treatment inequalities, among other items. Measures of fear in the Latino non-citizen cohort were minimal at the Texas site; proportions of respondents reporting fear pertaining to health care utilization were greater in the cohorts collected in California.

Background: The growing geriatric population challenges providers and administrators to adapt the current infrastructure and management models to provide emergency medicine (EM) access. Many emergency departments (EDs) label themselves as "geriatric EDs," yet the core elements of geriatric EM management remain undefined. A joint effort by the SAEM Academy of Geriatric Emergency Medicine, the American College of Emergency Physicians' (ACEP) Geriatric Section, and the American Geriatrics Society produced a white paper to delineate these core elements. Objectives: To evaluate the content validity of the white paper via a survey of representative EM providers. Methods: All attendees at the 2012 ACEP Geriatric Section annual meeting completed a 44-item electronic survey via the TurningPoint® audience response system. The survey included 11 demographic questions and 33 Likert-scale questions assessing different domains of the white paper. The domains included staffing, discharge processes, education, quality improvement (QI), and infrastructure.

Objectives: To characterize infectious precipitants of DKA and to identify clinical and laboratory differences between infected and non-infected DKA patients. Methods: This is a retrospective case series of ED patients with DKA at a large urban county hospital from 2009 to 2011. Potential cases were identified using ICD-9 codes suggestive of DKA or hyperglycemic crisis. All cases were reviewed, and those that did not meet ADA laboratory criteria for DKA or were aged <18 years were excluded.
Clinical and demographic data were extracted from subject charts by structured chart review. Characteristics of infected and non-infected patients were compared using t-tests, with Bonferroni correction for multiple comparisons. Results: Infections were identified in 48 (21%) of 226 DKA cases. Skin and soft tissue infections (13), pneumonia (13), and urinary tract infections (13) were most common, including 2 patients with concurrent pneumonia and urinary tract infections. Less frequently reported were sepsis or bacteremia without an identified source (4), dental abscess (2), pharyngitis (2), and bacterial sinusitis (1).

Methods: Over an 18-month period, a prospective cohort study of laceration repair was conducted at a teaching hospital (TH) and a community hospital (CH). Physicians completed a structured data form and indicated the potential risk for infection as 0, 1, 2, 5, 10, 15, 20, 25, 40, or >50 percent. Patients were followed to determine whether they had suffered a wound infection. We determined the ability of physicians in training and experienced board-certified emergency physicians to predict infection rates by comparing areas under ROC curves (AROC) and their sensitivity and specificity in predicting wounds with a greater than 5% risk. We also examined the use of prophylactic antibiotics in relation to experienced physicians' predictions.

Objectives: We determined the diagnostic characteristics of a clinical screening tool combined with early bedside point-of-care (POCT) lactates at the time of triage in ED patients with suspected sepsis. Methods: Study Design: Prospective, observational study. Setting: Suburban academic ED with an annual census of 90,000. Patients and Interventions: A convenience sample of adult ED patients with suspected infection was screened with a sepsis screening tool for the presence of at least one of the following: temperature greater than 38°C or less than 36°C, heart rate greater than 90, respiratory rate greater than 20, or altered mental status. Patients meeting criteria had POCT lactates measured at triage, which were immediately reported to the treating physicians if greater than 2.0 mmol/L. Measures: Demographic and clinical information, including final diagnosis and lactate levels. Outcomes: The criterion standard was the presence or absence of sepsis using the ACCP/SCCM consensus conference definitions. Data Analysis: Diagnostic test characteristics were calculated using 2-by-2 tables with their 95% CIs. Results: There were 137 patients screened for sepsis. Their mean (SD) age was 63 (20) years; 43% were female and 82% were white. Lactate levels were greater than 2.0 mmol/L in 40 (29.2%) patients. Sepsis was confirmed in 100 patients (73%). The diagnostic characteristics (95% CI) of the combined clinical screening tool and POCT lactates were: sensitivity 31% (23-41), specificity 76% (60-87), PPV 78% (63-88), and NPV 29% (21-39). Median (IQR) lactate level was 1.4 mmol/L (1-2.2). Patients with sepsis were older (65 vs. 58 years, p = 0.04) and had lower SBP (121 vs. 124 mm Hg, p = 0.001), but similar POCT lactate levels (1.9 vs. 1.5 mmol/L, p = 0.60). Conclusion: The combination of a clinical screening tool and early POCT lactates at the time of patient triage has moderate to good specificity but low sensitivity in adult ED patients with suspected sepsis.

Objectives: To define the effect of delayed antibiotic administration relative to the clinical identification of severe sepsis using an EHR alert system.
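The diagnostic characteristics reported above for the screening tool plus POCT lactate follow directly from a 2-by-2 table. In the minimal sketch below, the cell counts are an inference chosen to be consistent with the reported totals (137 patients, 100 with sepsis, 40 elevated lactates); the abstract itself gives only the summary statistics, so the exact cells are an assumption.

```python
# Sensitivity, specificity, PPV, and NPV with Wilson 95% CIs from a 2x2
# table. Cell counts are inferred from reported totals, not taken verbatim.
from statsmodels.stats.proportion import proportion_confint

tp, fn, fp, tn = 31, 69, 9, 28  # screen-positive/negative vs. sepsis present/absent

def prop_ci(k: int, n: int) -> str:
    lo, hi = proportion_confint(k, n, method="wilson")
    return f"{k / n:.0%} ({lo:.0%}-{hi:.0%})"

print("sensitivity:", prop_ci(tp, tp + fn))
print("specificity:", prop_ci(tn, tn + fp))
print("PPV:        ", prop_ci(tp, tp + fp))
print("NPV:        ", prop_ci(tn, tn + fn))
```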
Methods: Design: Retrospective analysis of a prospectively compiled registry of patient records from a single-center, 300-bed community hospital. Participants: All adult patients with suspected infection over a 3-month period in 2011. A physician order for intravenous antibiotics was used as a surrogate for clinical suspicion of systemic infection. Intervention: Application of a comprehensive automated EHR screening tool ("CV Alert") to identify severe sepsis patients based on a multi-factor alert system including vital signs, labs, and treatment team documentation. Outcomes: The primary outcome was all-cause in-hospital mortality; the secondary outcome was hospital length of stay (LOS). Analysis: Outcomes were assessed every 12 hours prior to and subsequent to the CV Alert. Results: We identified 2,255 consecutive patients with suspected infection over the 3-month period from 23,717 screened (9.5%). CV Alert was triggered in 867 of 2,255 (38%) and was associated with an increased mortality rate (5.3% vs. 0.6%, p < 0.001) and hospital LOS (5 vs. 2 days, p < 0.001) compared to patients not triggering an alert (n = 1,388). Antibiotics given 0-12 hours after the alert were associated with a significantly increased mortality rate (8.9% vs. 3.3%, p < 0.002) and longer LOS (6 vs. 4 days, p < 0.001) compared to antibiotics given 0-24 hours prior to the alert. Conclusion: Among patients with suspected infection, those identified by the CV Alert had an increased mortality rate and hospital length of stay. Delayed antibiotics relative to the time of the CV Alert were associated with a progressive increase in mortality rate and hospital length of stay. An EHR-based screening tool applied to a real-time health care system could aid in the early identification of, and timely antibiotic administration to, severe sepsis patients.

Objectives: To quantify the baseline incidence of pneumonia ED visits in the United States (US) among children and adults during the three years prior to the introduction of PCV-13 as a routine childhood vaccine in 2010. Methods: Pneumonia ED visits were identified using ICD-9-CM diagnosis codes in the Nationwide Emergency Department Sample (NEDS), a 20% stratified sample of all US ED visits and the largest source of US ED data. A pneumonia ED visit was defined as a visit with a primary pneumonia ICD-9-CM coded diagnosis, or a secondary pneumonia diagnosis with a primary diagnosis reflecting a symptom of pneumonia (e.g., fever) or a consequence of pneumonia (e.g., sepsis). Annual rates of all-cause pneumonia ED visits from July 2006 through June 2009 were estimated using US census data as population denominators. Pneumonia ED visit rates were also stratified by month, age group, and geographic region. Results: During the three-year study period, 6,917,025 all-cause pneumonia ED visits were identified, representing 2.2% of all US ED visits. Across the three consecutive years, defined as July through June of 2006-2007, 2007-2008, and 2008-2009, pneumonia ED visit rates per 1,000 person-years were 7.4 (95% CI 7.0-7.8), 7.8 (95% CI 7.3-8.2), and 7.6 (95% CI 7.1-8.0), respectively. Rates peaked at the extremes of age and during the winter months (figure). Annualized rates were stable within each month, age group, and geographic region. Overall, 39.3% of pneumonia ED visits, including 74.5% of pediatric and 28.1% of adult visits, were managed in the ED without hospitalization.
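The visit rates above are event counts over population denominators. A minimal sketch of that calculation with an exact Poisson 95% CI follows; the counts are illustrative round numbers in the vicinity of the reported first-year rate, and the sketch ignores the NEDS survey weighting the study actually used.

```python
# Rate per 1,000 person-years with an exact (Garwood) Poisson 95% CI.
# Event and denominator counts below are illustrative, not study data.
from scipy.stats import chi2

def poisson_rate_ci(events: int, person_years: float, per: float = 1000, alpha: float = 0.05):
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = per / person_years
    return events * scale, lo * scale, hi * scale

rate, lo, hi = poisson_rate_ci(events=2_230_000, person_years=301_000_000)
print(f"{rate:.1f} per 1,000 person-years (95% CI {lo:.1f}-{hi:.1f})")
```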
Conclusion: This study demonstrates that pneumonia ED visits are an important contributor to the overall pneumonia burden, and it provides baseline rates for evaluating the effect of PCV-13. A relatively large proportion of pneumonia cases diagnosed in the ED are managed in the outpatient setting. Therefore, quantification of ED visit rates provides information complementary to hospitalization rates for the assessment of pneumonia burden.

Conclusion: There is a suggested level of protection from influenza infection in individuals with increased levels of influenza-specific antibodies; however, among influenza-infected individuals, higher antibody titers were associated with both the need for hospitalization and a longer duration of symptoms for certain H1 antigen types. Influenza-specific antibody titers may serve as both a marker of infection and of disease severity in patients presenting to the ED with acute respiratory symptoms.

Conclusion: In our ED, younger patients are more likely to accept HIV testing. We identified a significant quality problem, with multiple patients not being offered a test despite triage nurses reporting that they had declined an offer. There are no differences in the sexual orientation of the two groups, but patients with some high-risk behaviors are more likely to accept testing. However, risk perception does not always equate with behavior.

Objectives: To determine which ED patients were more likely to decline kiosk-facilitated self-testing by exploring socio-demographic and behavioral factors. We hypothesized that factors associated with declining self-testing included older age (due to less familiarity with computers and kiosks) and high-risk behaviors (due to fear of a possible HIV+ diagnosis). Methods: This opt-in program evaluation studied 332 patients in an inner-city academic ED from 2/2012 to 4/2012, when a kiosk-based HIV self-testing program was the standard of care. The first kiosk in the two-stage system registered patients and assessed their interest in screening, while the second kiosk gathered demographic and risk-factor information and provided self-testing instructions. Patients who declined to self-test were offered testing by staff. Broad eligibility included patients aged 18-64 who were not critically ill, English-speaking, able to provide informed consent, and registered during HIV program operational hours. Data were analyzed using descriptive statistics and chi-square tests. Results: Patients aged 25-29 years were less likely to decline self-testing (38% vs. 54%, p = 0.034). Sex, race, education level, prior kiosk experience, average computer use, and chief complaint were not associated with differences in testing rates. Patients with high-risk (sexual or injection drug use) behaviors were less likely to decline self-testing (48% vs. 61%, p = 0.031). Patients given infectious disease-related ICD-9 ED discharge diagnoses were more likely to decline self-testing (63% vs. 49%, p = 0.046). Conclusion: EDs are evolving and now include kiosk-facilitated self-testing. We determined that patients aged 25-29 years and patients with high-risk behaviors were less likely to decline self-testing, while those with infectious disease-related diagnoses were more likely to decline self-testing. Further study is needed to identify patients' reasons for declining self-testing and to identify alternative testing methods.

Methods: NHAMCS is a four-stage probability sample of all nonfederal US ED visits.
We pooled 2007-2010 data and flagged all visits using previously published diagnostic codes. We generated national estimates and 95% CIs using appropriately weighted analyses, quantified visit frequencies and proportions, stratified visits by whether incision and drainage (I&D) was performed, and evaluated antibiotic prescribing practices. We developed four potential quality measures: 1) use of antibiotics for discharged I&D patients; 2) use of CA-MRSA-active agents for discharged non-I&D patients; 3) non-use of CA-MRSA-active regimens for I&D patients; and 4) use of trimethoprim-sulfamethoxazole monotherapy for non-I&D patients. We assessed performance by US region. Results: Visit frequencies remained stable over the study period. Our data suggest that the CA-MRSA epidemic has affected the South disproportionately (table), with skin infection diagnosed at a higher proportion of visits and I&D performed at a higher proportion of these. Practices also varied by region, with more aggressive care in the South (table). Use of non-indicated antibiotics was very common, and more so in the South. Failure to cover CA-MRSA among I&D patients treated with antibiotics was also common, but less so in the South. Conclusion: We developed four potential quality measures and found non-adherence to be common for all of them. Abscess patients are frequently treated with antibiotics, patients treated non-surgically are frequently covered for CA-MRSA, CA-MRSA coverage is often omitted in the antibiotic treatment of abscess patients, and trimethoprim-sulfamethoxazole monotherapy is common among patients treated non-surgically. These may represent opportunities for practice improvement and antibiotic stewardship.

Methods: A prospective, randomized controlled study was conducted in a large community, academic ED between 2010 and 2012. Exclusion criteria: age <18 years, weight >120 kg, nephrotoxic medications, pregnancy/breastfeeding, or creatinine clearance <50 ml/min. After informed consent, patients were randomized to receive an initial vancomycin dose of 30 mg/kg or 15 mg/kg, followed by three doses of vancomycin 15 mg/kg IV every 12 hours. Adverse events were monitored, and serum creatinine (SCr) values were collected at baseline and throughout the hospital stay. Chi-square statistics were used to compare adverse events between the two groups.

Objectives: This study aimed to determine whether improved ED treatment adherence for HCAP and better patient outcomes were achieved by using an electronic notification system (ENS) to alert staff when at-risk patients presented to the emergency department (ED). Methods: We prospectively studied three patient cohorts presenting to a large, academic, tertiary-care ED. Among patients identified by an ENS for severe sepsis or septic shock, we retrospectively identified patients in whom the ED physician suspected pneumonia as a source and who had ≥1 HCAP risk factor. The three cohorts were a control group (pre-ENS, G1), ENS radio page to the ED pharmacist (G2), and ENS dissemination via a tracking board to the multidisciplinary team (G3). We measured adherence to guideline therapy and patient outcomes. Statistical analyses were performed using Pearson's chi-square test for comparison of proportions across groups and the Kruskal-Wallis test for time variables. Results: Of a total of 660 patients with severe sepsis or septic shock across all cohorts, 198 patients were included. Patient demographic characteristics did not differ across groups (G1 = 70, G2 = 75, G3 = 53; age 70.9 ± 16.9 (SD) years; 54.5% male).
Adherence to guideline-appropriate antibiotics in the ED was greater in G2 (24/75, 32%) and G3 (23/53, 43%) vs. G1 (9/70, 13%) (p = 0.0006). The absolute reduction in mean time from ENS to first antibiotic administered was 46 minutes in G2 vs. G1 and 26 minutes in G3 vs. G1 (p = 0.0194). The proportion of blood cultures obtained prior to antibiotic administration was unchanged, as was the duration of ED care. There were trends toward an increased admission rate from the ED to the ICU. There was no difference in length of hospitalization, mortality, or rates or duration of mechanical ventilation. Conclusion: Use of an automated ENS to identify severe sepsis or septic shock patients in the ED and alert staff was associated with significantly increased adherence to guidelines for empiric HCAP therapy in the ED.

Objectives: To determine whether early quantitative resuscitation attenuates organ dysfunction in survivors of septic shock. Methods: Pre-planned secondary analysis of a large completed randomized controlled trial conducted at an urban emergency department with >100,000 visits. Inclusion criteria: suspected infection, two or more systemic inflammatory response criteria, either systolic blood pressure <90 mm Hg after a fluid bolus or lactate >4 mmol/L, and survival to hospital discharge. Exclusion criteria: age <18 years, no aggressive care desired, or need for immediate surgery. Clinical and outcomes data were prospectively collected on consecutive eligible patients for 1 year before and 2 years after implementing early quantitative resuscitation as standard care. Patients in the before phase received non-protocolized care (NPC) at attending physician discretion. Survivors who received quantitative resuscitation were compared to survivors who received NPC. The primary outcome was the worst sequential organ failure assessment (SOFA) score during hospitalization. Secondary outcomes were hospital and ICU lengths of stay (LOS). Chi-square and Mann-Whitney U tests were used as appropriate. Results: A total of 301 subjects, 260 in the quantitative resuscitation group and 41 in the NPC group, were included. There were no significant differences in age, race, or sex between the two groups. The initial SOFA score was 6.1 in the quantitative group and 5.4 in the NPC group (p = 0.17). There was no significant difference in the worst SOFA score during hospitalization between the quantitative resuscitation and NPC groups (6.8 vs. 5.7, respectively, p = 0.16). The quantitative resuscitation group did have a significantly shorter ICU LOS (3 vs. 2 days, p < 0.004), but there was no difference in hospital LOS. Conclusion: In this analysis, among survivors of septic shock we found no difference in maximal organ dysfunction during hospitalization.

Methods: A prospective, randomized controlled study was conducted in a large community, academic ED between 2010 and 2012. Exclusion criteria: age <18 years, weight >120 kg, nephrotoxic medications, pregnancy/breastfeeding, or creatinine clearance <50 ml/min. After informed consent, patients were randomized to receive an initial vancomycin dose of 30 mg/kg or 15 mg/kg, followed by three doses of vancomycin 15 mg/kg IV every 12 hours. Vancomycin trough levels were drawn thirty minutes prior to the second, third, and fourth doses. Intention-to-treat analyses were conducted to compare the percentage of patients in each group reaching therapeutic levels (>15 mg/L) and those reaching minimal levels to prevent resistance (>10 mg/L).
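As a minimal sketch of the weight-based dosing arithmetic in this protocol, the snippet below computes the two study arms' initial doses and applies the trial's >120 kg exclusion. It illustrates the study design only and is not clinical dosing guidance.

```python
# Weight-based initial vancomycin dose for the two study arms, with the
# trial's weight exclusion applied. Illustrative only, not clinical advice.
def vancomycin_initial_dose(weight_kg: float, loading: bool) -> float:
    """Return the initial IV dose in mg: 30 mg/kg (loading arm) or 15 mg/kg."""
    if weight_kg > 120:
        raise ValueError("weight > 120 kg was an exclusion criterion in this trial")
    mg_per_kg = 30 if loading else 15
    return mg_per_kg * weight_kg

print(vancomycin_initial_dose(80, loading=True))   # 2400 mg (30 mg/kg arm)
print(vancomycin_initial_dose(80, loading=False))  # 1200 mg (15 mg/kg arm)
```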
Longitudinal data analysis was used to account for within-person correlation. Results: Ninety-nine patients with similar characteristics were enrolled. After 12 hours, a greater percentage of patients reached 15 mg/L in the 30 mg/kg group compared to the 15 mg/kg group (36% vs. 3%, respectively). At 24 and 36 hours, similar proportions of patients reached the 15 mg/L target (21% vs. 12% and 18% vs. 22%, respectively). Similar results were seen with a 10 mg/L threshold (see table). Conclusion: This is one of the first studies to prospectively evaluate vancomycin loading doses of 30 mg/kg in the ED. Patients receiving an initial loading dose of 30 mg/kg attain therapeutic levels more rapidly. Therefore, we recommend that emergency physicians consider a higher loading dose of vancomycin in accordance with the IDSA guidelines.

Results: Fourteen experts were recruited. Five participated in key informant interviews. The modified Delphi survey was administered five times (with four modifications based on the experts' responses). After the fifth round there was at least 82% agreement on each criterion. The final definition included 10 interventions: major surgery, advanced airway, blood products, admission for spinal cord injury, thoracotomy, pericardiocentesis, C-section, intracranial pressure monitoring, interventional radiology, and death. An example of a criterion is: "An injured patient who receives vascular, neurologic, abdominal, thoracic, pelvic, spine, or limb-conserving surgery (i.e., on a limb that was found to be pulseless distal to the injury prior to surgery) within 24 hours of arrival at a hospital needed the resources of a trauma center." Conclusion: We developed a consensus-based, functional gold-standard definition of needing the resources of a trauma center, which may help to standardize field triage research and allow for cross-study comparisons.

Objectives: To determine whether the number of CT scans an adult blunt trauma patient receives is an independent predictor of mortality after controlling for age and injury severity using the Injury Severity Score (ISS). Methods: Using our trauma registry of patients admitted by the trauma service, we conducted a retrospective cohort study of blunt trauma patients presenting for evaluation at our Level I trauma center between January 2008 and June 2011. Exclusions included age <18 years, penetrating trauma, transfer from another ED, and records that did not include the ISS or the CT scans received. The total number of CT scans per subject was calculated by giving one "point" for each of the following categories of scans received: CT head, CT c-spine, CTA chest, and CT abdomen/pelvis, for a minimum score of 0 and a maximum score of 4. Logistic regression was used to assess the association between mortality and the number of CT scans after controlling for age and ISS. Results: A total of 3,337 subjects were included in the analysis.

Background: Unlike mechanical injuries, burns often progress both in depth and size over the first few days after injury. Data from our laboratory demonstrate a spatiotemporal wave of necrosis that continues to spread from the initial burn site for at least 24 hours, suggesting that this may not be due to passive necrosis alone, whereas apoptosis is only apparent after 24 hours. Recently, an active programmed cell necrosis pathway, necroptosis, has been described in other conditions such as myocardial infarction.
Objectives: We hypothesized that necroptosis plays a role in burn injury progression and that treatment with necrostatin-1, an inhibitor of necroptosis, would reduce burn progression. Methods: Using a 150-g brass comb preheated to 100 degrees Celsius, we created two comb burns (one on each side), each consisting of four rectangular burns separated by three unburned interspaces, on the backs of male Sprague-Dawley rats (300 g). The interspaces represent the ischemic zones surrounding the central necrotic core; left untreated, these areas undergo necrosis. In the first experiment, 10 rats each were randomized to necrostatin-1 1.65 mg/kg or DMSO given by intraperitoneal injection 1 hour after injury. In the second experiment, 10 rats each were randomized to two intravenous injections of necrostatin-1 1.65 mg/kg or its vehicle at 1 and 4 hours after injury. The primary outcome was the percentage of interspaces undergoing necrosis within 7 days of injury. A sample of 10 rats in each study group had 80% power to detect a 20% difference in necrotic interspaces.

Objectives: This study is the first to describe substance use among a trauma population using the ASSIST V.3 and compares it to the current standard for substance abuse screening in a trauma center. Methods: A cross-sectional screening study was conducted with all patients admitted to the Level I trauma center service at Rhode Island Hospital during July and August 2012 who met inclusion criteria (at least 18 years old, proficient in English, and medically able). Participants completed the ASSIST, and an aggregate summary of their responses was compared to results from the same time period for the standard screening test on the inpatient trauma service at this institution, the CAGE questionnaire for alcohol abuse. Results: Of 126 eligible patients, 112 verbally consented to participate in the study. The ASSIST showed that 25.0% of participants needed a brief intervention for alcohol and that an additional 6.3% needed referral to more intensive therapy. An intervention was indicated for at least one other substance in 30.0% of participants, ranging from 28.6% for marijuana to 0.9% for inhalants. The CAGE questionnaire was positive in only 6.7% of patients. Conclusion: Screening for multiple substances of abuse is relevant in trauma patient populations. The ASSIST may be a more sensitive screening test for alcohol misuse than other screening tests, which are designed to identify only those needing referral for intensive treatment. In addition, the ASSIST identifies those who misuse any of a number of substances, in contrast to many other tests, which screen for only one individual substance. The intent of the SBI requirement for trauma centers is to take advantage of a teachable moment with an at-risk population, and the ASSIST should be considered for use as the screening tool to further this goal.

Objectives: To provide an estimate of potentially erroneous documentation in an EMR in the ED. Methods: We reviewed records of patients at an urban academic adult-only ED documenting exclusively in an EMR (Allscripts HealthMatics 7.0) with checkbox exam items. All ED patients during the study period (11/1/10-2/28/11) were eligible. Using our EMR, we identified all visits that included documentation of a pelvic exam (PE) or rectal exam (RE), chosen for their relatively clear sets of indications and because their performance requires a deliberate step.
We performed explicit and implicit review, eliminating records if the chief complaint or diagnoses were among those predetermined to include indications for the exam (abdominal pain, pelvic pain, etc.). For the remainder, we reviewed the complete record for evidence supporting an indication. We evaluated the inter-rater reliability of reviewers, and our coders reviewed whether questionable items affected the evaluation and management (E/M) code assigned. We report proportions with confidence intervals and the kappa statistic.

Conclusion: Patients overwhelmingly indicated that note-taking improved the accuracy of physicians' notes and that typing while patients were relating their history was not distracting. Patients were largely neutral on the subjects of computer versus handwritten notes and tablet versus standard computers. Among the nearly half of patients who did express an opinion, a majority preferred tablet computers over typing on standard keyboards. These findings will guide the implementation of tablet computer-based documentation and the development of patient education.

Methods: We implemented the NIHSS for mobile devices using the Sencha Touch development environment (Sencha Inc., Redwood City, CA). Physicians were permitted to use smartphones as well as department-provided tablet computers. We performed a before-after cohort study of ED resident charts. Charts were reviewed for the presence or absence of the NIHSS, and of specific items within the NIHSS, using a standardized abstraction tool. Accounting for an adoption phase, all patients with stroke team activation during the study period were included. Because the study focus was resident documentation, we excluded patient visits for which there was no resident chart. The primary outcome was whether any element of the NIHSS was documented on the ED resident chart. The secondary outcome was the number of NIHSS items documented on the ED resident chart. Data were analyzed using Stata version 12.0. Proportions were compared using two-sample tests of proportions, and means were compared using two-sample t-tests. Results: In the before phase, 63 consecutive charts between 1/1/2012 and 2/26/2012 were reviewed. In the after phase, 59 consecutive charts between 9/1/2012 and 10/22/2012 were reviewed. The proportions of ED resident charts with an NIHSS documented were 0.42 before and 0.78 after (p = 0.002). The average numbers of NIHSS items on the ED resident charts before and after the intervention were 3.8 and 5.1, respectively (p = 0.041).

Background: Recombinant activated protein C is protective in animal models of sepsis, stroke, and lung injury, but has a poor side-effect profile and has been taken off the market. Thrombomodulin (TM) and the endothelial protein C receptor (EPCR) are endothelial membrane proteins that partner to bind thrombin and activate protein C at sites of inflammation, ischemia, and thrombosis. Both proteins are lost from the endothelial surface in disease states. We have shown previously that targeted delivery of TM to PECAM-1 on the endothelial surface provides protection in mouse models of lung injury. Recombinant TM targeted to PECAM-1, however, does not effectively partner with endogenous EPCR, limiting its potential as a therapeutic.
Methods: TM and EPCR were fused to targeting moieties derived from two anti-PECAM antibodies, which we previously reported enhance each other's binding. Binding and activity of the TM and EPCR fusion proteins were tested on mouse endothelial cells. Fusion proteins were injected intravenously in C57BL/6 mice prior to intratracheal injection of endotoxin. Lung expression of VCAM-1 and E-selectin was measured using qPCR, and MIP-2 was quantified in bronchoalveolar lavage (BAL) fluid using a commercial ELISA. Endothelial barrier dysfunction was quantified by measuring transendothelial leak of 125I-labeled albumin. Results: TM and EPCR fusion proteins target PECAM-1 on endothelial cells and demonstrate collaborative binding. Independent of binding effects, the EPCR fusion protein results in a 50% increase in the activation of protein C by cell-bound TM, an effect blocked by anti-EPCR antibody. Consistent with the in vitro results, the combination of TM and EPCR fusion proteins is more protective than either protein alone in a mouse model of sepsis-induced lung injury. Specifically, co-delivery reduces BAL MIP-2 (p < 0.001), VCAM expression (p < 0.001), and transendothelial leak of albumin (p < 0.001) as compared to TM fusion protein alone. Conclusion: Co-targeting of TM and EPCR to the pulmonary endothelium enhances activation of protein C and represents a novel and promising therapeutic strategy for the treatment of diseases involving acute endothelial injury.

Objectives: Because these limitations may hinder widespread adoption of hypothermia treatment, and improved approaches to temperature control may enhance adoption, we evaluated a novel esophageal device (similar in size and shape to an orogastric tube, with channels allowing circulation of water from an external heat exchanger) for use in inducing hypothermia. We hypothesized that this device could successfully induce and maintain therapeutic hypothermia (defined as a 4°C reduction from baseline for each animal) in a swine model over 24 hours, within a 1°C range of goal temperature. Methods: Five female Yorkshire swine, with a mean weight of 65 kg (range 61-70 kg), were anesthetized with inhalational isoflurane via endotracheal intubation and instrumented. The device was then inserted orally into the esophagus, with placement confirmed via auscultation and suction of gastric contents through the central suction channel. The water channels of the device were then connected to an external chiller (Gaymar MediTherm III), and swine temperature, measured intravascularly, was reduced to goal temperature by setting the chiller to automatic cooling mode. A 24-hour cooling protocol was completed before rewarming and recovering the animals. Results: The average baseline temperature for the five animals was 38.6°C (range 38.1-39.0°C). All swine were cooled successfully, with an average rate of temperature decrease of 1.3°C/hr (range 1.0-1.9°C/hr). Average deviation from goal temperature was 0.2°C, and no treatment for shivering was necessary during the protocol. Histopathology of esophageal tissue, performed up to 14 days after recovery, showed no adverse effects from the device. Conclusion: A new esophageal device successfully induced therapeutic hypothermia in large swine. Goal temperature was maintained within a very narrow range, and thermogenic shivering did not occur. These findings from this proof-of-concept study suggest a potentially useful new modality for inducing therapeutic hypothermia.
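The cooling results above imply a simple time-to-goal calculation: at the observed rates, a 4°C reduction takes roughly two to four hours. A minimal sketch of that arithmetic, using the slowest, mean, and fastest reported rates:

```python
# Time to reach a 4 °C reduction at the reported cooling rates.
GOAL_REDUCTION_C = 4.0

for rate_c_per_hr in (1.0, 1.3, 1.9):  # slowest, mean, and fastest observed
    hours = GOAL_REDUCTION_C / rate_c_per_hr
    print(f"at {rate_c_per_hr} °C/hr: {hours:.1f} h to reach goal temperature")
```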
Background: Sphingosine-1-phosphate (S1P) is a bioactive lipid that regulates vascular and immune function by binding to its receptors (S1PR1-5). S1PRs have become attractive targets for drug development; in fact, fingolimod (FTY720), recently FDA-approved for multiple sclerosis, targets S1PR1, 3, 4, and 5. Objectives: In this study we aimed to investigate the role of S1PR1 in ischemic brain injury. Methods: Transient brain ischemia was induced in mice using the middle cerebral artery occlusion (MCAO) mouse model of stroke. The specific S1PR1 agonist SEW-2871, fingolimod, and/or ANA-12, a TrkB ligand that prevents BDNF binding to its receptor TrkB, were administered to mice after reperfusion. For in vitro studies, the mouse brain endothelial cell line bEnd.3 was used, and primary cortical neurons were isolated from E16.5 mouse embryos. To mimic ischemia-reperfusion injury in vitro, oxygen-glucose deprivation (OGD) studies were conducted. S1PR mRNA levels were measured in mouse brain, brain endothelial cells, and primary cortical neurons by reverse transcription and quantitative PCR analysis (RT-qPCR). Results: RT-qPCR revealed that S1PR1 is predominantly expressed in brain, brain endothelial cells, and primary cortical neurons. Using the MCAO model, we demonstrated that ischemic brain injury significantly altered the mRNA expression profiles of S1P receptors. The specific S1PR1 agonist SEW-2871 and fingolimod, given after reperfusion, markedly reduced brain infarction in experimental stroke. Interestingly, we found that activation of S1PR1 by SEW-2871 leads to an increase in BDNF mRNA and protein after MCAO. To investigate the direct involvement of BDNF in S1PR1-mediated neuroprotection, we employed ANA-12, a TrkB ligand that prevents BDNF binding to TrkB. We found that ANA-12 abolished the neuroprotective effect of S1PR1 activation in experimental stroke. In addition, SEW-2871 induced BDNF expression in brain endothelial cells and primary cortical neurons under both normoxic and oxygen-glucose deprivation conditions. Conclusion: Taken together, our data indicate that activation of S1PR1 by SEW-2871 and fingolimod ameliorates ischemic injury in the mouse brain in a BDNF-dependent manner. Therefore, S1PR1 could be a novel therapeutic target to prevent brain injury in stroke patients.

Objectives: We sought to determine the diagnostic accuracy of several brief measures of inattention in older ED patients: 1) reciting the months of the year backwards from December to July; 2) reciting the months of the year backwards from December to January; 3) reciting the days of the week backwards; 4) spelling the word "LUNCH" backwards; and 5) the Vigilance A test. Methods: This was a secondary analysis of a prospective observational study designed to validate a novel delirium assessment for older ED patients. The study was conducted at an academic ED from 7/2009 to 2/2012. English-speaking patients who were 65 years or older and had been in the ED for less than 12 hours were included. Those who were deaf, blind, comatose, or severely demented were excluded. An emergency physician asked the patient to perform the inattention tasks listed in the objectives; for the Vigilance A test, the patient squeezed the rater's hand on the letter "A" over a series of 10 letters ("SAVEAHAART"). The reference standard for delirium was a comprehensive psychiatrist assessment using DSM-IV criteria. All assessments were performed independently within 3 hours.
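The analysis described next computes sensitivities, specificities, and AUCs for these tasks. A minimal sketch on simulated error counts follows; the distributions are assumptions for illustration, not the study's data.

```python
# AUC for an inattention task scored as number of errors, with sensitivity
# and specificity at one example cutoff. All data below are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
delirious = rng.poisson(4, 25)       # hypothetical error counts, delirium
not_delirious = rng.poisson(1, 209)  # hypothetical error counts, no delirium

y_true = np.r_[np.ones(25), np.zeros(209)]
scores = np.r_[delirious, not_delirious]
print(f"AUC = {roc_auc_score(y_true, scores):.2f}")

cutoff = 2  # hypothetical: two or more errors counts as a positive screen
sensitivity = (delirious >= cutoff).mean()
specificity = (not_delirious < cutoff).mean()
print(f"cutoff >= {cutoff}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```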
Sensitivities and specificities for various cutoffs and areas under the receiver operating characteristic curves (AUC) for each inattention task were calculated. Results: Of the 234 patients enrolled, 25 (11%) were delirious. The median age (IQR) was 74 (69, 79) years old, 106 (45%) were female, 31 (13%) were non-white race. The diagnostic performance for each inattention task using the psychiatrist's DSM-IV assessment as the reference standard can be seen in the table. Conclusion: These brief measures of inattention may be useful for delirium monitoring in the ED with the exception for reciting the days of the week backwards; this task had the lowest AUC. Specificity would have to be sacrificed to achieve adequate sensitivity, but specificity may be improved by concomitantly assessing for the other features of delirium such as altered mental status, altered level of consciousness, disorganized thinking, etc. Results: Mean age was 46 years (SD 20); 58% female; 62% white, 24% Hispanic; 15% reported no-to-some schooling; and 39% reported income less than $15,000. Internet use among older (65+) ED patients was relatively low (24%); but among internet-using older ED patients, regular usage was high (86%). Ownership of computers (89%), cell phones (58%), and tablet devices (8%) varied. E-mail usage among internet-using older patients was high (73%), and a substantial proportion also reported social networking usage (24%). Objectives: To determine the agreement between answers to the six standardized ISAR questions between patients and their self-identified surrogates. Methods: A convenience sample of ED patients aged 65 years and older was approached between June and August, 2012, and excluded if they lived in a nursing home or assisted living center, were critically ill, or could not follow basic commands. The patient and self-identified surrogate answered the ISAR with respect to the patient. Kappa statistics were used to analyze agreement between the two parties. Results: Data were available from 120 dyads of patients and matched surrogates. As seen in the table, there was a range of agreement across questions. Highest agreement (kappa 0.88) related to admissions within the last 6 months and lowest agreement (kappa 0.23) related to vision. reporting deficits using the ISAR screening tool, calling into question the reliability of self-reported markers of frailty. Future work should help identify objective tests to identify patients at risk for adverse outcomes after ED care. Conclusion: Implementation of a Geriatric ED coincided with significant improvement in patient satisfaction regarding personal and family aspects of care, as well as perceptions of ancillary testing. These may be due to the structural enhancements built into the Geriatric ED, as geriatric patient satisfaction with nurses and doctors, and satisfaction among those <65 yo, did not significantly change during this period. Results: Innovative Rasch analysis was performed due to its superior psychometric properties. Questions were selected as a good fit based on having infit and outfit statistics greater than 0.4 and less than 1.6. The measure was broken down into five subcategories including physical disability, cognitive disability, stress, depression, and isolation. For each subcategory a screening question was selected which then triggers additional questions. Each category receives an integer score between one and five with one being the most disabled and five being the least disabled. 
A global health score was then created by combining the five subcategories. Each of the subcategories was analyzed using the same infit and outfit statistics which showed good agreement between the subcategories and the global score. Conclusion: By using Rasch analysis we are able to reduce the disability assessment from hundreds of variables to five screening and 15 follow-up questions for a brief assessment named the ED GRAY (Geriatric Readmission Assessment at Yale) with a simple output score (figure) that can be quickly performed by providers in the ED to initially assess disability in geriatric patients and orient emergency providers to issues that need to be resolved. Conclusion: In regard to our primary outcome there is a statistically significant decrease in ED utilization after the follow-up phone call from a nurse. Our intervention appears to be successful in reducing ED utilization, but further studies will be needed to confirm this result and determine effects on patient care. Objectives: To compare EMP time to task completion and cognitive load when medication orders are placed using chief complaint based versus pharmacological classification versus free text search grouping. We conducted a randomized cross-over trial in an academic tertiary care emergency department simulating the clinical work environment at a work station. Thirty clinicians were randomized to medication order entry using one of the three modes: chief complaint based (e.g. chest pain, headache), pharmacological category (e.g. adrenergic agents, antihistamine), and free text search. Nine simulated patient scenarios were developed to test the hypothesis. The main outcomes were time to task completion and provider cognitive load, as measured by the validated NASA-task load index. We compared time to task completion and provider cognitive load between modes of medication order entry using the Wilcoxon rank-sum test. Conclusion: CPOE using chief complaint based grouping is substantially more efficient and requires less cognitive effort than medication ordering using pharmacologic groupings or free text search. Chief complaint based medication grouping for CPOE in emergency medicine has potential to increase clinical efficiency and potentially limit increased length of stay associated with implementation of an electronic health record. Conclusion: Broader coverage for black adolescents was associated with higher outpatient and ED visit rates over time, while for white adolescents there were no significant changes over time. This led to a diminished disparity in the outpatient and divergent ED trends for black vs. white adolescents in the CHIP era. These findings suggest improved access to outpatient services for black adolescents with more broadly available coverage, but also an increasing demand for ED services in the same group. These findings have implications for the role of initiatives to reduce health care disparities, especially those intended to reduce emergency department utilization. Does The "Invisible Hand" Optimally Regionalize Acute Care Providers? Ari B. Friedman, Guy David, and Brendan G. Carr University of Pennsylvania, Philadelphia, PA Background: As emergency departments (EDs) and primary care clinics (PCCs) have decreased in availability, retail clinics (RCs) and urgent care clinics (UCCs) have developed. RCs treat a limited set of conditions and UCCs treat more severe conditions requiring diagnostics and interventions. 
These for-profit entrants into the acute care market have the potential to relieve ED crowding by adding capacity. However, it is unknown whether market forces will cause them to locate where they are most needed. Objectives: We sought to determine what type of acute care markets these innovators were most likely to enter. We hypothesized that RCs and UCCs would be more likely to enter markets with higher income and lower uninsurance. Additionally, we explored the possibility that, because they represent a less expensive alternative to ED care when paying cash, RCs and UCCs might locate in areas with higher income but higher uninsurance. Methods: This is a national observational ecological study. Clinic locations were obtained via "webscraping" (RCs), from the Urgent Care Association of America (UCCs), the American Hospital Association database (EDs), and SK&A Healthcare Databases (PCCs). The Area Resource File supplied county covariates. We used negative binomial regression to examine the relationship between counts of each type of clinic per county and uninsurance, while controlling for county income, geographic area, population density, physician density, mid-level practitioner density, and percent black, Hispanic, and elderly. Conclusion: TExT-MED is a low cost, highly scalable system, but its use did not result in a decrease in hospital admissions. Although there was a trend towards a decrease in ED visits using TExT-MED, due to the small sample size, the change in ED visits did not reach statistical significance. Methods: Prospectively collected continuous quality improvement (CQI) data for orotracheal intubations performed in an academic emergency department over a 5-year period were retrospectively analyzed. Following each intubation, the EM resident completed a data form regarding multiple aspects of the intubation, including the occurrence of esophageal intubation and the device(s) utilized. Patients in whom intubation was attempted using a direct laryngoscope (DL), GlideScope (GVL), or CMAC were included in the analysis. Results: A DL was utilized in 1,230 patients, and in 76 (6.2%, 95% CI: 4.9% to 7.7%) of these patients the tube was incorrectly placed in the esophagus. A video laryngoscope (VL), either the GVL or CMAC, was used in 1,239 patients, and the tube was incorrectly placed in the esophagus in 15 (1.2%, 95% CI: 0.68% to 2.0%) of these patients. All esophageal intubations were immediately recognized and tube placement was corrected. In this academic emergency department where supervised EM residents perform intubations, use of a VL was associated with a greater than five-fold reduction in the incidence of esophageal intubations when compared to using a DL. This difference may be attributable to the supervising physician's ability to directly visualize the airway and provide instantaneous feedback to the EM resident during the procedure, which is not possible when a DL is used. Objectives: To determine which type of intubating device was the most successful and resulted in the fastest intubation for novice intubators. Methods: Thirty-one undergraduate students without prior intubation experience reviewed an anatomical model and instructional videos for direct laryngoscopy, video laryngoscopy, and S.A.L.T. In a randomized order each subject was allowed up to three attempts with each device on manikins. Time and number of attempts to success in intubation were recorded. Subjective evaluation of the ease of use for each device was also recorded using a 1-5 scale. 
A chi-square test was performed to test whether the success rates were different between the devices. The average number of attempts, average time to success, and subjective evaluation of the ease of use were compared using ANOVA analysis. Pairwise comparison within each ANOVA analysis was performed using Bonferroni correction to account for multiple comparisons. Results: Video laryngoscopy had the highest success rate but took longest to perform. The chi-square test for the success rate had a pvalue of 0.004, indicating that the difference in the success rate was statistically significant. The one-way ANOVA of the number of attempts did not show statistically significant differences (p=0.087). The one-way ANOVA of the time to successful intubation had a p-value of <0.001. A pairwise comparison with Bonferroni correction identified that the difference between video laryngoscopy and S.A.L.T was statistically significant (p<0.001). The other two pairwise comparisons were not statistically significant. No statistically significant difference was identified for the ease of using the devices. Objectives: We sought to investigate the effect of VL on resident physician endotracheal intubation timeliness during simulated intubation/CPR situations. Methods: EM residents participated in a prospective, observational trial of the time to successful intubation using high fidelity simulation mannequins. Residents performed both DL and VL during a simulated CPR case; the order in which each method was used was randomly assigned. Investigators recorded the number and duration of interruptions, start and stop times, and the time until successful endotracheal intubation. Participants were blinded to the study purpose until after the investigation was complete. Statistical comparisons were made using the t-test for paired samples. Results: Fourteen residents participated in the study. The mean time to successful intubation using DL was 53.4 sec (SD 44.5, 95% CI: 27.7-79.1 sec) while the mean using VL was 24.6 sec (SD 8.4, 95% CI: 19.7-29.4 sec), a statistically significant difference (t = 2.635, df = 13, p = 0.021). DL procedures incurred a mean of 1.14 interruptions (SD 1.10, 95% CI: 0.51-1.78 interruptions) while VL procedures had 0.21 (SD 0.43, 95% CI: -0.03-0.46 interruptions), a difference that was also significant (t = 3.484, df = 13, p = 0.004). The mean total interruption duration was 0.24 min (SD 0.29, 95%CI: 0.07 to 0.41 min) for DL procedures and 0.02 (SD 0.04, 95% CI: -0.01 to 0.05 min) for VL procedures, again a significant difference (t = 2.918, df = 13, p = 0.012). Conclusion: In this cohort, time to successful intubation was significantly shorter using VL than it was using DL. The number of interruptions during the intubation process was fewer and the duration of the interruptions was shorter for VL as compared to DL. Background: Laryngoscopy is a complex technical skill with a low success rate among novices. Video laryngoscopy (VL) has emerged as a critical tool in the "difficult airway" armamentarium of EM physicians with a resultant increase in the types of commercially available VL devices. Training residents in VL has become increasingly challenging for EM residency (EMR) programs secondary to the array of devices. Additionally, it is unclear how much VL training should be provided as it is unknown how prevalent VL devices are in the community. 
Since EM residents go on to work in diverse settings, many in community (non-EMR) EDs, it is preferable that they receive training on the airway modalities they will encounter in practice. Objectives: To compare the prevalence and type of VL devices in EMR programs compared to non-EMR EDs. We hypothesize that a higher percentage of EMRs employ VL in their EDs as compared to non-EMR EDs and that the types of VL devices vary between these two practice environments. Methods: This was a survey study conducted from July 2012 to October 2012. ACGME accredited, MD EMR programs in the US were sent electronic surveys regarding VL in their program. Non-EMR EDs in NY State were contacted by phone with the same survey. Non-EMR EDs were contacted by phone because e-mail addresses were not available. A chi-square test was performed to determine whether the difference in VL prevalence was significant. Results: 158 EMR programs and 132 non-EMR EDs were surveyed. 138 MD EMR programs (87.3%) responded and 135 of these (97.8%) reported having some form of VL in their EDs. 121 non-EMR EDs (91.7%) participated and 102 (84.3%) reported having VL. The difference in proportion of EMR versus non-EMR EDs that have VL was chisquare = 13 (p < 0.001). The Glidescopeâ device was present in 90.4% of EMR programs and 94.1% of non-EMR EDs with VL. Additionally, 25.2% of EMR programs trained their residents with multiple VL devices. Conclusion: The majority of EMR programs trained residents in VL. The Glidescopeâ device was used most frequently. Non-EMR EDs in New York State had a lower presence of VL devices, with the Glidescopeâ device again being the most common. These results demonstrate that VL is pervasive in both practice environments. Residents should be encouraged to become proficient with VL as they will likely encounter this in their practice. Objectives: To determine the effect of a bloody airway on the first pass success rate when using videolaryngoscopes in the emergency department setting. Methods: Data were prospectively collected on all patients intubated in an academic Level I trauma center from July 2007 to June 2012. Following each intubation, the intubating physician completed a standardized form which included the device used, number of attempts, and presence or absence of blood in the airway. The devices studied included the GlideScope and CMAC. The main outcome measured was first attempt success rate. 95% confidence intervals were calculated. can be challenging. Patients often have anatomic and physiologic characteristics that make intubation particularly risky. Accordingly, first-attempt success is highly desirable, as multiple attempts may be poorly tolerated and increase the chance of failure and adverse events. Objectives: Study hypothesis: Compared to direct laryngoscopy (DL), video laryngoscopy (VL) with the GlideScope and CMAC will improve first-attempt success rates and laryngoscopic visualization during endotracheal intubation in the ICU. Methods: Prospective QI registry of ICU airway management in the 20+ bed medical ICU of a 450+ bed university medical center and the 12 bed mixed ICU of a university-affiliated community hospital. All intubations were performed by pulmonary/critical care, critical care medicine, emergency medicine, or family medicine services. 
After each episode of endotracheal intubation in the ICU, clinical staff completed a standardized QI form that included patient demographics, clinical data, predictors of difficult airway (DAPs), methods of airway management, medications, outcomes, and complications. Methods: We performed a planned subanalysis of a large, prospective, multi-center observational study of children (<18 years old) with blunt torso trauma. We included those patients who underwent abdominal CT scan with IV contrast. Abdominal CT scans were obtained at the discretion of the emergency department (ED) treating physicians with or without the addition of OC based on the participating centers' guidelines and clinician/radiologist discretion. Abdominal CT results were based on the interpretations of each sites' faculty radiologists. All patients were followed up to identify those with IAI. Abdominal CT scans were considered abnormal if a specific IAI was present or findings suggestive of IAI were identified. Objectives: To determine the reliability of abdominal findings for detecting children with intra-abdominal injuries (IAI) after blunt torso trauma. Methods: This is a planned analysis of a prospective, multicenter observational study of children (<18 years) with blunt torso trauma. Complaints of abdominal pain, the presence and degree of abdominal tenderness, and the initial Glasgow Coma Scale (GCS) score were recorded prior to diagnostic imaging (if obtained). We excluded patients with GCS < 13 and those 2 years of age for the complaint of abdominal pain. We calculated the sensitivity of abdominal findings for IAI with 95% confidence intervals (CI). We examined the association of isolated abdominal pain/ tenderness (i.e. patients with no other variables from a previously derived clinical prediction rule) with IAI and IAI requiring acute intervention (therapeutic laparotomy, angiographic embolization, blood transfusion, or ! 2 nights of IV fluid for gastrointestinal/pancreatic injuries). Results: Of the 12,044 patients in the main study, 11,277 (94%) had GCS scores ! 13; the median age was 11.3 (IQR: 6.1-15.1) years. Sensitivities of abdominal findings for IAI are presented below. In those with GCS scores of 15, the relative risk of IAI increased as the degree of abdominal tenderness increased: mild 3.0 (95% CI 2.3, 4.0), moderate 9.4 (95% CI 7.6, 11.6), and severe 19.4 (95% CI 15.4, 24.4). For those with isolated abdominal pain/ tenderness, 155/2,117 (7%, 95% CI 6, 9) had IAI and 20/2,117 (1%, 95% CI 0.6, 1.5) had IAI undergoing acute intervention. The risk of IAI increases as degree of abdominal tenderness increases in children with blunt torso trauma and normal mental status. The reliability of the abdominal exam, however, decreases as the GCS decreases. For children with isolated abdominal pain or tenderness, additional diagnostic evaluation is warranted. Objectives: To describe prehospital LSIs performed, incorrectly performed LSIs, and missed LSIs (procedures that are indicated but not performed) in pediatric patients in a combat setting. In this IRB-approved study, we collected LSIs on patients arriving to six combat hospitals from the field, treated by any provider type, and of any nationality. Military special interest patients were excluded. Trained site investigators evaluated patients on arrival and recorded demographics, vital signs, LSI performed, if LSI was performed correctly, and if LSI was missed. 
LSIs were predefined and included airway, thoracic, extremity, and vascular access procedures, and resuscitation techniques. From the large dataset, we analyzed LSIs performed on pediatric patients. Chi-square and Fisher's exact tests were used to compare incidence of LSIs between groups. Two-sample Poisson tests were used to compare number of LSIs per patient between groups. Significance was p < 0.05. Results: Out of 1491 patients, 119 pediatric patients (8.5%, CI 7-10%) arrived to a combat hospital from the field: 81% male, mean age 10 yrs (range 1-17 yrs), and 95% local national. 44% of patients were blast injury, 26% penetrating, 24% blunt, and 7% burn. Most common pediatric injuries were head 26%, lower extremity 21%, upper extremity 16%, abdomen 11%, and thoracic 9%. In the pediatric group, 1.2 LSIs were performed per patient (244 LSIs/119 patients), and adults received 1.95 LSIs/patient, p<0.001. The most common pediatric LSIs were hypothermia prevention 30%, vascular access 25%, fluids 16%, and pressure packing 15%. Adults received more tourniquets, p<0.03. 4% of children received an airway device, 1 needle thoracostomy, 1 chest tube, no cricothyrotomies. Most common missed LSIs were vascular access, hypothermia prevention, intubation, and pressure packing. 4% (CI 2.1%-7.5%) of LSIs were performed incorrectly, mostly commonly vascular access. Conclusion: In a combat setting, pediatric patients who arrived to a combat hospital had 96% of prehospital LSIs performed correctly. Children had more LSIs per patient than adults and missed LSIs occurred more frequently. Objectives: In preparation for a future clinical trial of progesterone for serious TBI, the PECARN conducted a prospective observational study of children with TBI, to determine patient characteristics and identify patient and guardian arrival times. Glasgow Coma Scale (GCS) scores of 3-12 at 16 pediatric emergency departments (EDs) over 6 months. We collected data on variables thought to be important to determine eligibility and accrual for a future trial, including: patient demographics; ED GCS scores; presence of hypotension and/or hypoxia; history of prehospital cardiac arrest; nonsurvivable injury; and death in the ED. We recorded time of arrival of patient and legal guardian at the treating ED. We considered guardian arrival within 3 hours critical for obtaining written informed consent. Results: We enrolled 295 children with head trauma and GCS scores of 3-12, including 215 (73%) with GCS scores of 3-8. Annualized total ED patient volume was 1 million visits. Median age was 6.5 years. 148 (50%) were transferred from another ED. Enrolled patients had the following important characteristics when considering a future therapeutic trial: 35 (12%) had hypotension; 11 (4%) had hypoxia; 35 (12%) had prehospital cardiac arrest with resuscitation; 32 (11%) had non-survivable injuries; and 15 (5%) died in the ED. Most children arrived within 2-3 hours of injury. Cumulative arrival times of legal guardians to the treating ED after injury was: 59 (21%) within 1 hour; 112 (39%) within 2 hours; 145 (51%) within 3 hours; 183 (64%) within 4 hours. We identified important clinical characteristics of potentially eligible patients for a future multicenter trial of progesterone for serious pediatric TBI, and patient and guardian arrival times. Only one-half of guardians arrived in the ED within 3 hours of injury. 
Enrolling children into a future trial of progesterone for TBI poses challenges regarding timing of patient and guardian arrival, and will likely require EFIC. Objectives: The purpose of our study was to evaluate for differences when an EM provider is presented with two simulated patients with toxicological emergencies. We hypothesized there are objective differences in care provided to adult and pediatric populations. Methods: In this randomized, cross-over study, participants managed two cases individually. Half were exposed to an adult case first (digoxin overdose); the other half a pediatric case first (beta blocker overdose). Both scenarios had nearly identical stems and followed similar courses. Both patients were altered, hypotensive, and bradycardic. Participants were evaluated using a checklist based on history of present illness (HPI), exam, initial management, data acquisition, treatment, and disposition. All participants completed pre/ post-session surveys indicating self-assessments and age preference. Performance was evaluated using Sign rank and McNemar's tests. Time-to-event analysis was done using Kaplan-Meier principles. Lastly, subjective preference was compared to objective performance. Results: Case order showed no significant difference. HPI and disposition scores were better in the pediatric case but treatment scores were better in the adult case. Five critical actions (ordering EKG, giving atropine, voicing a diagnosis, checking a bedside glucose, and ordering a chem panel) showed time-to performance differences, all earlier in the adult case (p<0.05). In both pre/post-session surveys, 75% and 97% (respectively) of residents stated they were more comfortable treating adult toxicity cases. Only 34% performed better in the adult case (p<0.05). 9% stated they were equally comfortable managing pediatric and adult toxicities. Conclusion: This study identifies a discrepancy in management between pediatric and adult simulated toxicological scenarios by EM residents. These findings merit further investigation with regards to generalizability to other institutions and practicing physicians. This can also serve as the first step in a needs assessment to identify gaps in curricular design for residency as well as future maintenance of certification programs. Background: Trauma resuscitations require the rapid formation of an ad hoc medical-surgical team that must perform efficiently but rarely interacts outside of the trauma bay. Objectives: Assess effectiveness of a simulation-based educational intervention teaching non-technical (teamwork) skills utilizing crisis resource management (CRM) principles in an in situ environment. Methods: Instruction was based upon principles of CRM adapted from the Agency for Healthcare Research and Quality's TeamSTEPPS© course. 55 volunteers (attending EM and trauma surgery physicians, fellows, residents, nurses, and technicians) participated in one of eight sessions over 4 months. Volunteers reviewed CRM educational materials prior to the simulations. Sessions were three-part: in situ simulation-based resuscitation that was video taped, intervention including video review and debrief, followed by a second resuscitation with bedside debrief. Participants completed the TeamSTEPPS Teamwork Perception Questionnaire (T-TPQ) and a self-efficacy survey. Aggregate and subgroup scores were compared pre/post intervention with two-tailed paired sample t-tests and independent samples t-tests (sub-groups) (SPSS). 
Results: Aggregate data showed significantly higher postintervention scores on T-TPQ and self-efficacy surveys (all p values <0.05). The largest improvements (T-TPQ) were team structure (3.2, 95% CI 2.3-4.1) and mutual support (3.2, 95% CI 2.2-4.2); smallest was communication (1.5, 95% CI 0.4-2.6). The largest improvement (selfefficacy survey) was ability to debrief (1.45, 95% CI 0.96-1.95); the smallest was role performance (0.54, 95% CI 0.3-0.78). The majority of participants (87%) believed teamwork will improve. Non-physicians scored higher on T-TPQ and self-efficacy pre/post intervention. Using simulation-based instruction in the principles of CRM, we demonstrated improved perception of teamwork skills among a multidisciplinary trauma resuscitation team. Post-intervention scores improved in all core elements. Both physician and non-physician perceptions improved. The Effects of Expressive Writing on Medical Student Anxiety and Performance Anne K. Merritt, James W. Bonz, Emily B. Ansell, Kelly L. Dodge, James D. Dziura, and Leigh V. Evans Yale University School of Medicine, New Haven, CT Background: Expressive writing has a wide range of benefits on physical and emotional health. Studies demonstrate that it can improve immune function, reduce blood pressure, and lead to fewer doctor visits. Medical school is a particularly stressful time for students and is associated with high rates of anxiety and depression. Writing might be an effective tool to improve performance and reduce anxiety in this population. Objectives: To determine if a 5-minute expressive writing intervention reduces students' anxiety levels and improves performance as team leader during a 15-minute medical simulation scenario. Methods: This was a prospective, randomized controlled crossover study in which third-year medical students participated in 24 simulation scenarios and acted as the team leader three times over a 3-month period. Students were randomized to two groups: Group 1 performed the writing exercise before the first scenario they led, and Group 2 performed it before the second scenario. Faculty coordinators and resident facilitators were observers who were blinded to the intervention. A mixed model analysis was used to evaluate the effect of the writing exercise on three outcome measures: students' self-reported anxiety levels at three time points and observer-reported anxiety and performance levels. Results: From June 2011 to May 2012, 101 third-year Yale medical students participated in the simulation course; one student who performed the writing exercise twice was excluded (N=100). The mean differences in self-reported anxiety levels on arrival to the study site, immediately prior to the scenario, and immediately after the scenario when sitting quietly as compared to writing were 0.14 (95% CI: -0.35, 0.64), -0.05 (95% CI: -0.55, 0.45), and 0.08 (95% CI: -0.42, 4.57), respectively. Observer-reported anxiety levels on a 10-point scale (1 no anxiety, 10 very anxious) were 4.24 (95% CI: 3.94, 4.55) when students sat quietly and 4.41 (95% CI: 4.11, 4.71) when students wrote. Observerreported performance levels on a 10-point scale (1 poor, 10 very well) were 6.66 (95% CI: 6.33, 6.99) and 6.50 (95% CI: 6.17, 6.84), respectively. The writing intervention did not have a significant effect on student performance or anxiety. The benefits of team interaction when students did not write prior to the scenario likely outweighed any possible benefits of expressive writing in this study. 
short structured contacts, and it is known to predict medical school success better than the traditional, unstructured interview and application materials. Its utility in EM residency selection is unknown. Objectives: We theorized the MMI would better predict PGY-1 performance than other traditional elements in the EM residency application. Methods: Drawing from three EM residency programs, 71 interns during their first month of training completed an eight-station MMI developed to focus on desirable EM characteristics. End of PGY-1 year assessments of global performance and six sub-categories covering many of the ACGME core competencies were obtained along with data about disciplinary actions or concerns, conference attendance, and intraining examination scores. Linear regression modeling was performed to evaluate the predictive value of the MMI. Results: Score on the MMI is predictive of overall performance (F (1, 70) =5.29, p=0.02) as well as borderline predictive of the participant's year-end quartile rank within his or her class (F (1, 70) =4.14, p=0.05), but did not predict performance in subcategories such as communication skills or professionalism. The MMI did not predict presence of a disciplinary action or significant concern during the PGY-1 year. Performance on the individual stations was generally not predictive of overall performance; however, performance on the two stations involving role-playing did predict performance. (F (1, 69) =5.39, p=0.02). The match desirability rating also did not predict MMI performance or overall performance during the PGY-1 year. Conclusion: MMI performance predicts 5-10% of the variation in PGY-1 performance. The predictive value of the MMI is small but it is additive to professional judgment (i.e. match desirability rating). This suggests the MMI is identifying unique elements which are not identified by traditional measures. We do not appear to identify strengths in communication and professionalism. Nor does the MMI identify risk for performance concerns; however, category heterogeneity may limit sensitivity. The role-playing stations appear to be predictive of overall performance and may with additional research present a means to integrate these concepts on a smaller scale into applicant assessment. Deliberate Practice for the Development of Expert Performance in Basic Cardiopulmonary Resuscitation David Scordino, Nicole Shilkofski, Elizabeth Hunt, and Julianna Jung Johns Hopkins University School of Medicine, Baltimore, MD Background: Simulation has been shown to be effective for teaching resuscitation skills, though optimal techniques for maximizing learner performance have not been defined. Expert opinion suggests that deliberate practice is a key feature of effective simulation, though the best way to implement deliberate practice in this setting has not been described. Objectives: To compare the resuscitation performance of students trained using traditional simulation techniques to those trained using a deliberate practice paradigm. Methods: Both control and intervention groups consisted of 120 BLS-certified second-year medical students participating in a one-hour simulation-based session focused on quality CPR and defibrillation. The control session consisted of a high-fidelity cardiac arrest scenario followed by a traditional debriefing using the "good judgment" model, after which the scenario was repeated and debriefed again. 
The DP session was conducted using a deliberate practice technique, wherein students completed the arrest scenario followed by a short debriefing and didactic, after which they repeated the scenario several times until they were able to do it perfectly, rotating team roles each time. Each participant was then individually assessed in a different high-fidelity cardiac arrest scenario, and performance was graded using a standardized checklist focused on completion and timing of key interventions. Performance was compared between groups using chisquare analysis. Results: Students in the DP group were more likely to call for help promptly (77% within one minute vs. 55% of controls, p=0.0013). DP students were also more likely to initiate CPR rapidly (75% within one minute vs. 58% of controls, p=0.0115). There was no difference in defibrillation times between groups (47% vs. 40% within 3 minutes, p=0.287). Student satisfaction was comparable in both groups, with the session rated as "very useful" by 98% of control students and 96% of DP students. Conclusion: Incorporating a deliberate practice paradigm into simulation-based training was successful at improving the timeliness of some key resuscitation interventions, though this brief intervention did not enable learners to achieve expert performance standards. The technique was nonetheless effective for key outcome measures, and may help optimize the effectiveness of simulation in other areas as well. The Objectives: To determine adolescents' willingness to be tested for sexually transmitted infections, assess their levels of STI knowledge, and determine their preferred modes of obtaining STI information in an urban emergency department (ED). Methods: A survey of a prospective convenience sample of innercity adolescent patients was taken from the adult and pediatric ED of an urban hospital in the Bronx from 7/2012 to 10/2012. Eligible adolescents between the ages of 15 and 24 were recruited. They completed a questionnaire that sought information about their demographics, risk factors, and STI knowledge, and their preferred mode of obtaining STI information. Results: The average age of participants were 20.4 AE 2.5 years old (n=185). Out of those who completed the survey, 41.1% were male, 56.8% were Hispanic, and 51.3% were African American. When asked if they would test for an STI, even if they had no symptoms, 87.9% said yes to chlamydia, 86.3% said yes to gonorrhea, 93.1% of females said yes to HPV, and 50.7% of males and 73.1% of females said yes to an anal pap smear. The results of the knowledge questions showed that the participants were more knowledgeable about chlamydia and gonorrhea than they were about HPV. When asked how they would like to receive information about STI, the highest responses were text messages at 30.6% and internet sources at 30.0%. About a quarter (24.0%; 41/171) tested positive previously for chlamydia, and over a tenth (13.5%; 23/170) for HPV. 37.6% (68/181) never tested for chlamydia, gonorrhea, or HPV. A high percentage of the sample population in the ED was willing to receive testing for chlamydia, gonorrhea and HPV, even if they showed no symptoms. The participants demonstrated strong knowledge of chlamydia and gonorrhea, but not of HPV. Objectives: The present study tested a new web-based program to facilitate tobacco SBIRT in ED settings called the Health Evaluation and Referral Assistant (HERA). 
We hypothesized that, compared to the control condition, the HERA, which provides a personalized feedback report and referrals based on insurance status, home address, and clinical severity, would result in greater treatment initiation over the 90 days post-visit. Methods: Subjects were recruited from four EDs. Consented subjects were randomized to the either the control group (assessment only, standard referral list) or the experimental group (assessment, individualized feedback reports and the option for a faxed referral). We conducted Pearson's chi-square tests comparing the two groups on whether contact was made with a tobacco treatment provider and whether that contact resulted in an initial clinical evaluation (i.e., treatment initiation). Participants were assessed at 1 and 3 months from the date of enrollment. to have a positive influence on ED workflow and patient throughput. A number of studies have demonstrated positive effects including reductions in medication errors, test turn-around times, and ED length of stay. One potential benefit of CPOE is a reduction in the time until orders are entered due to eliminating a clerical "middle man" and the potential for bedside order entry. On the other hand, there is concern that with the expanded use of order sets, CPOE may lead to an increase in resource utilization due to an increase in the number of diagnostic tests performed per patient. Improved knowledge of potential effects of CPOE on multiple ED metrics can allow for better understanding of the overall effects on ED patient throughput. Objectives: Examine ED metrics before and after CPOE implementation and determine whether CPOE implementation decreases time to order entry and increases diagnostic tests per patient. We conducted a retrospective review of ED metrics before and after CPOE implementation at our urban, Level I trauma center. Pre-implementation metrics were measured from July until November, 2011, and compared to post-implementation metrics from July until November 2012. ED throughput metrics including door to provider time, time to first orders, time to disposition, and overall length of stay were calculated. Utilization data including labs and imaging studies ordered per patient were also measured. Results: Data from a total of 40,093 patient visits were analyzed (see table) . 19,793 visits occurred during the pre-CPOE period and 20,300 during the post-CPOE period (2.6% increase). Door to provider time and time to first order entry increased by seven (11.9%) and six (5.5%) minutes, respectively. Average number of lab tests ordered per patient increased by 13.8% and number of imaging tests per patient increased by 7.8%. Time until disposition increased by 26 minutes (12.8%). CPOE implementation had no effect on ED length of stay. Conclusion: ED throughput metrics including time to first order entry and time to disposition increased after CPOE implementation. Utilization of lab and imaging tests increased after CPOE implementation. Overall length of stay was essentially unchanged. Negative consequences of CPOE should be considered when measuring the overall benefit of electronic information systems. Background: While clinical decision support (CDS) has been shown to decrease the overall use of CT pulmonary angiography (CTPA) in ED patients with suspected pulmonary embolism (PE), its effect on physician adherence to evidence-based guidelines is yet unknown. 
Objectives: To determine the effect of CDS on physician adherence to evidence-based guidelines for use of CTPA in ED patients with suspected PE. Our primary outcome was adherence to evidence-based guidelines as determined by explicit chart review. Our secondary outcome was adherence to evidence-based guidelines as determined by implicit chart review performed by three attending physicians (two emergency physicians and one internist). We hypothesized that both explicit and implicit adherence to evidence-based guidelines would increase with CDS implementation. Methods: This prospective study was performed in our ED, located in a 777-bed, urban academic Level I trauma center. We evaluated the 12-month periods prior and subsequent to the quarter during which a second-generation CDS, based on Well's criteria, was implemented. We reviewed the electronic patient record to determine adherence to Well's criteria and D-dimer testing when appropriate (either documented explicitly or inferred implicitly by the attending investigators). Two hundred random records were reviewed (100 pre and post) based on a sample size calculation to detect a 20% effect size with a power of 0.8 (alpha = 0.05) and an estimated baseline proportion of 70%). We used chi-square tests with proportional analyses to assess pre-and postintervention differences. Results: A total of 1,155 patients with suspected PE were evaluated by CTPA during the 12 month period prior to the implementation of the CDS (9/1/09-8/31/10), and 1,292 patients in the 12 months (12/1/10-11/ 30/11) following the quarter during which CDS was rolled out. Subsequent to CDS implementation, adherence to evidence-based guidelines increased from 55% to 73% (p=0.008) based on explicit chart review and from 58% to 74% (p=0.017) according to implicit chart review of electronic patient records. Conclusion: Implementation of clinical decision support significantly increases adherence to evidence-based guidelines for the use of CT in ED patients with suspected PE. However, even with CDS, nearly one quarter of CTPA studies performed deviated from evidencebased guidelines. Automated Outcome Classification of Emergency Department CT Imaging Reports Kabir Yadav, Efsun Sarioglu, Meaghan Smith, and Hyeong-Ah Choi The George Washington University, Washington, DC Background: Reliably abstracting outcomes from free text electronic medical records remains a challenge. While automated classification of free text has been a popular medical informatics topic, its evaluation in real-world clinical settings has been limited. The two main approaches are linguistic (natural language processing) and statistical (machine learning). We have developed a hybrid system for abstracting CT reports for specified outcomes. in these phenotypic differences, and have been associated with differential rates of repeat sepsis and mortality. Objectives: To externally validate if previously identified IL-10 1082A, IL-10 592C, 734G, 3367G (CGG halotype), Protein C 1641G, Protein C 1654C, and CD 14 260T are associated with episodes of repeat sepsis, severity of illness, and mortality in emergency department patients with severe sepsis. Methods: Whole blood was collected in the PAXGENE collection system from patients enrolled at one site (n = 192) during a large multicenter clinical trial comparing two sepsis resuscitation strategies. 
The electronic medical records of the 36-hospital system encompassing three states were searched for all enrolled subjects for 2 years following enrollment into the clinical trial. All admissions were reviewed by a single author to determine if the patient was readmitted with a diagnosis sepsis, or developed sepsis during any hospitalization. Repeat sepsis patients were then matched by sex, race, and age with controls who had only the single index episode of sepsis. The specific SNPs investigated were chosen based on certain criteria that included: 1) a frequency within the population of 25% or greater, 2) high (>10%) mortality rate, 3) multiple blood stream infections (BSIs), 4) recurrent sepsis, 5) recurrent sepsis with increased risk of death. The presence of SNPs were determined using Taqman genotyping. Rates of SNPs between repeat sepsis groups and controls as well as between survivors versus non-survivors were compared using chi-square test. Results: Forty cases and 40 controls were identified and analyzed. All 80 patients were heterozygous for IL10 1082A and CD14 260T, and were not further analyzed. Protein C 1641G and 1654C were 85% and 45% heterozygous, and 15% and 55% homozygous for the AA and CC alleles. Ten percent of patients demonstrated the IL10 CGG halotype. We found no significant difference between hetero and homozygotes in rate of repeat sepsis (p = 0.53-0.89) or mortality (p = 0.19-0.44). Conclusion: In the largest study of SNPs in repeat sepsis to date, we found no difference in the incidence of studied SNPs between patients with and without repeated sepsis. Objectives: To evaluate the performance of PCT measured at the time of CAP admission for predicting septic shock. Methods: As part of the CDC Etiology of Pneumonia in the Community (EPIC) Study, we enrolled adults ! 18 years old with clinical and radiographic CAP hospitalized at five hospitals in Chicago and Nashville from Jan. 2010 to June 2012. Serum was collected at the time of admission and PCT concentration was measured with the bioMerieux VIDAS BRAHMS assay. Septic shock was defined as the need for vasopressor therapy despite adequate fluid resuscitation within 72 hours of presentation. We compared PCT levels among CAP patients who did and did not develop septic shock using the Wilcoxon rank-sum test. We evaluated the predictive performance of PCT for septic shock by constructing a receiver operating characteristic (ROC) curve and enumerating sensitivity and specificity at 0.25 ng/ml and 0.5 ng/ml PCT cut-points, which have previously been identified as potentially useful for predicting severe disease. Results: In total, 1,864 adults were enrolled and 82 (4.4%) developed septic shock. Only 20.7% of patients who developed septic shock within 72 hours had a systolic blood pressure < 90 mm Hg at initial presentation. Median PCT level was significantly higher among patients who developed septic shock (2.21 ng/ml; IQR: 0.19 to 13.20 ng/ml) compared with those who did not (0.14 ng/ml; IQR: 0.04 to 0.73 ng/ml) (p < 0.01). The area under the ROC curve for PCT to predict septic shock was 0.73 (95% CI: 0.67 to 0.79) (figure). Using a 0.25 ng/ml PCT cut-point, sensitivity and specificity were 0.68 (95% CI: 0.57 to 0.78) and 0.61 (0.59 to 0.63), respectively. Using a 0.5 ng/ml cut-point, sensitivity and specificity were 0.63 (0.52 to 0.74) and 0.70 (0.67 to 0.72), respectively. Conclusion: PCT levels on admission may help identify CAP patients with increased risk of septic shock. 
Further study is needed to evaluate when in the time course of illness PCT levels may be most clinically useful and if they add value to standard risk stratification methods currently used. Background: Racial/ethnic disparities exist in out-of-hospital cardiac arrest (OHCA). Latinos are less likely to have bystander cardiopulmonary resuscitation (CPR) provided in the setting of an OHCA; however, little is known about why this occurs. One hypothesis is that there may be barriers to activating the 9-1-1 system that may then delay the arrival of emergency medical services (EMS) providers and the start of CPR. Objectives: To identify barriers to calling 9-1-1 for Latinos living in high-risk neighborhoods (as defined by high incidence of OHCA, low prevalence of bystander CPR) within Denver, CO. Methods: Previous research identified five high-risk neighborhoods in Denver, CO. These neighborhoods included mainly Latino residents with an annual median household income less than $35,000. Between August 2011 and March 2012, 64 participants were recruited from the five neighborhoods (Villa Park, Valverde, Baker, Westwood, and Whittier) using purposeful and snowball sampling. Six focus groups, comprised of 6-10 participants, and 9 key informant interviews were conducted. The interviews were transcribed, coded by three independent reviewers and a conceptual framework developed. All study team members, including community liaisons, reviewed and discussed the major themes to ensure the views of the community were represented. Data was analyzed using Nvivo software. Results: Participant demographics are listed in the table. Analysis revealed six main themes as resident barriers to calling 9-1-1: financial cost incurred by victim/victim's family (e.g. pay before transport), undocumented immigration status (e.g. EMS asking for papers prior to assisting), fear of getting involved (e.g. other family members that were undocumented in house), uncertainty in what an emergent situation was (e.g. lack of recognition of OHCA), language barriers (e.g. inability to communicate with 9-1-1 operator), and cultural issues (e.g. 9-1-1 in Mexico is different than in the U.S.). Conclusion: Interviews among primarily Latino residents of highrisk neighborhoods in Denver revealed numerous misconceptions and misunderstandings regarding 9-1-1 utilization. Community-based interventions may need to be developed and implemented to facilitate and encourage the use of 9-1-1 during emergency situations such as OHCA. Objectives: The objective of this study was to look at how a change in ED volume over time affects the association between ED characteristics and LOS or elopements based on ED size. The hypothesis is that LOS and LBTC will change with the change of ED operational metrics, and this change varies by ED size. The ED Benchmarking Alliance has collected yearly operational metrics since 2004. We included EDs providing at least two years data through 2011. ED sizes are defined by the following categories in ED volume: <20K, 20-40K, 40-60K, and over 80K. A linear mixed effects model was used to compare changes in overall LOS and LBTC with changes in volume, admission rate (%), pediatric volume (%), and arrival by EMS (%), and how this change varies by annual patient volume was assessed using interaction terms between volume and each of above ED characteristics. Results: Over the 8-year period, 524 EDs were included in the analysis. Association between overall LOS, LBTC, and ED characteristics varied by ED size. 
A significantly higher LBTC rate was observed with increased volume in EDs 20-40K and >80K. LBTC rates decreased with higher pediatric volume only for EDs >80K. EMS arrivals and admission rate were not significant in the LBTC model. LOS was found to increase significantly with the increase of ED volumes for EDs <20K and 20-40K, but not for EDs > 40K. Increases in admission rate were found to be associated with increasing LOS for EDs 20-40K, 40-60K, and 60-80K. Pediatric volume was negatively associated with LOS only for EDs of 60-80K and >80K EDs. Arrival by EMS differences increased the LOS for all volume categories except <20K. Objectives: This study assesses confidence and accuracy of CCPR performance in individuals who self-report previous training compared to those without previous training. were approached for CCPR training. Participants were taken to a semiprivate area in the ED and given a questionnaire before training that collected basic demographic information and history of CPR training. Knowledge, confidence, and likelihood of performing CPR were assessed with a 10-point Likert scale. Subjects demonstrated baseline CPR knowledge for one minute on a training manikin. Performance was judged for: 1) check responsiveness, 2) call help/9-1-1, 3) begin compressions immediately, 4) hand placement in center of chest, 5) compression rate (90-110/min), and 6) compression depth (>5 cm, confirmed by the manikin's auditory feedback device). Subjects were graded by individual component and composite performance. After baseline assessment, all subjects were given CCPR training on the same manikin. Independent t-tests were used to assess differences in means and Fisher's exact test was used for differences in proportions, and 95% confidence intervals are provided. Results: For 50 enrolled subjects, mean age was 36 (SD 14), 27/50 (54%) were Caucasian, 28/50 (56%) were female, and 30/50 (60%) reported prior CPR training. Subjects with prior CPR training performed more components correctly (mean 53% v. 42%, difference in proportions 11%, 95% CI 1-21%, p=0.03; table). No subject performed all components correctly, and there were no differences between groups on the individual components. Mean rating for knowledge and confidence about performing CPR was 5/10, likeliness to perform bystander CPR on a stranger (6/10) or a friend/family member (8/10), with no difference between groups. Conclusion: Persons with prior CPR training performed more components of CCPR accurately, but correct compression rate and depth were very poor. Prior training did not lead to higher confidence or likeliness to perform CCPR if needed. The Methods: Defibrillator records (E Series, ZOLL Medical) were prospectively collected during the treatment of consecutive adult OHCA patients by EMS providers in an urban EMS system in Arizona. Exclusion criteria: non-cardiac etiology, EMS-witnessed arrest. Data analysis: In addition, mixed effects multivariable logistic regression, with hospitals providing post-EMS care as a random effects variable, was used to assess the independent associations of commonly used metrics of CPR quality with survival to hospital discharge and positive neurological outcome (PNO, Cerebral Performance Category = 1 or 2). Fractional polynomial regression was used to confirm that CPR quality metrics were linear in the logit scale. Multiple imputation was used to account for missing CPR quality data. 
Results: Several AHA guideline CPR quality metrics were significantly related to survival (chest compression (CC) depth, CC recoil, percent of CCs ! 2 inches, and pre-shock pause) and positive neurological outcome (CC recoil, percent of CCs ! 2 inches, and preshock pause) and are shown in the Table 1 . Logistic regression results are shown in Table 2 . The percentage of CCs that were 2 inches or deeper during CPR was significantly associated with survival to hospital discharge after controlling for age, witnessed arrest, shockable rhythm upon EMS arrival, and provision of therapeutic hypothermia (21% increase in odds of survival per each 10-point increase in percent of compressions ! 2 inches). Conclusion: Several AHA guideline CPR quality metrics were significantly related to survival. This is the first study to show a significant relationship between CPR quality metrics and survival and neurological outcomes. Further research to confirm and extend these findings is needed. Background: ED crowding is an increasing problem and has been associated with adverse patient outcomes. ED expansion is one method that has been advocated to reduce ED crowding. Objectives: To determine the effect of ED expansion on measures of ED crowding. We performed a retrospective, before/after study using administrative data from two 11-month periods (11/1/09-9/30/10 and 11/1/ 10-9/30/11) before and after the expansion of the adult ED from 34 to 54 adult beds in an academic medical center. Relocation of the ED occurred on October 6, 2010. Data regarding ED volume and staffing as well as hospital admissions and occupancy were obtained from the electronic medical record and from administrative records. Primary outcomes were the left without being treated (LWBT) rate and total ED boarding time for admitted patients. A patient was considered to have LWBT if s/he left the ED prior to being evaluated by a physician. ED boarding was defined as the interval from one hour following admission bed request placement to the patient leaving the ED. A linear regression model was used to determine whether ED expansion was associated with the outcome measures. Results: Over the 11-month before expansion period, the median daily adult volume was 128 patients (IQR 118-137) and after expansion was 144 (IQR 134-156). The percentage of patients who LWBT declined from 9.0% before expansion to 8.3% after expansion. Total ED boarding time increased from 160 to 180 hours per day. After adjusting for ED waiting time, ED patient volume, ED trauma volume, patients treated in the ED urgent care, ED length of stay, the number of ward and ICU admissions, ED nurse staffing, elective surgical admissions, hospital occupancy, and ICU occupancy, both the decrease in LWBT (p=0.002) and increase in ED boarding hours (p<0.001) were independently associated with the ED bed expansion. Conclusion: An increase in ED bed capacity was associated with a decrease in the percentage of patients who LWBT but an unintended consequence of increase in ED boarding hours. ED expansion alone does not appear to be an adequate solution to ED crowding. The Conclusion: Changing to symptom-based MDRO isolation immediately improved ED LOS for patients with MRSA by over 2.5 hours and by 2.25 hours for VRE at C2 but not C1. This difference may be due to a lower proportion of private rooms at C2. 
Acquisition of MRSA was not affected, but it did slightly increase for VRE; however, VRE acquisition since has decreased to levels at or below that prior to the policy change (which remains in place). Further investigation of these issues is warranted as there was clear benefit at one campus without erosion of quality. Conclusion: AKI is prevalent early in the course of presentation among septic pediatric children and increased recognition by emergency physicians and pediatricians is needed. Future studies are needed to validate these findings and to understand the short-and long-term consequences of sepsis-related AKI. Methods: Peripheral blood samples were collected from adolescent females diagnosed clinically with PID in the ED, as well as from a similarly aged control group presenting to the OR for elective nonabdominal surgery. RNA was isolated and subjected to microarray analysis. Initial analysis was performed on a training set of 18 patients (9 PID patients with either Neisseria gonorrhea (GC) or Chlamydia trachomatis (CT) infection and 9 control patients). Both supervised and unsupervised cluster analysis was performed, followed by network analysis using Ingenuity software. The training set was used to classify a set of 15 additional patients with clinical PID who did not have GC or CT and 2 controls. Results: Supervised cluster analysis of the training set revealed 170 genes which were differentially expressed in PID patients vs. controls (figure). Network analysis indicated that several of the differentially expressed genes are involved in immune activation. Analysis of the additional PID patients based on the training set findings revealed that patients with positive testing for Trichomonas vaginalis partitioned with the PID group, while patients with no organism identified partitioned with both groups. In order to investigate the fraction of gene expression variability which might be explained by known clinical and laboratory values, we evaluated the association of a composite gene expression profile variable with white blood cell count, duration of symptoms, urinary tract infection, and C-reactive protein (CRP). These results demonstrated possible contribution from CRP. Conclusion: RNA sample collection from adolescents in the ED is feasible. Genes were identified that were differentially expressed in PID patients vs. controls, many of which are involved in inflammation. Future studies should confirm the training set findings on a larger sample, and may lead to improved accuracy of PID diagnosis. Background: Early goal-directed therapy (EGDT) delivered in a protocolized system can improve outcomes in pediatric sepsis. Timely patient recognition and ongoing data collection for outcome assessment are barriers to optimal delivery of resource-intensive EGDT. The electronic health record (EHR) offers the benefit of automated data capture. While instituting a clinical protocol for EGDT in sepsis, we simultaneously developed a sepsis registry linked to the EHR for data acquisition and quality improvement. Objectives: To use an EHR sepsis registry in a pediatric emergency department (ED) to describe clinical characteristics of sepsis patients and track timeliness of recognition and treatment. Methods: The sepsis registry was designed and built through collaboration between pediatric emergency medicine, critical care, and medical informatics teams using REDCap electronic data capture tools. 
Patients with suspected sepsis were included if an ED physician identified any two of the following: vital sign criteria for systemic inflammatory response syndrome (SIRS), abnormality in mental status or perfusion, or underlying high-risk condition. For these patients, the ED physician used an EHR order set to activate the sepsis protocol, which also automatically enrolled the patient in the EHR sepsis registry. We used standard descriptive statistics to describe demographics, comorbidities, microbiologic results, and disposition for enrolled patients. Median intervention times were tracked monthly and evaluated with run charts. Results: 122 ED patients were identified from 11/2011-10/2012. 62/ 122 (50.8%) patients were admitted to an intensive care unit (ICU), 55/ 122 (45.1%) to the inpatient floor, and 5 (4.1%) were discharged. 61 (50%) patients had at least one comorbidity. One patient died in the ICU. Bacterial infections (SBI) were identified 42.6% of patients, and viruses in 31.1%. Patients admitted to the ICU were more likely to have SBIs (RR 2.5 95%CI 1.5, 4.0). Improvements in median time from sepsis recognition to initial antibiotics and IV fluids were noted over the study period (figure). Conclusion: An EHR sepsis registry, in combination with a sepsis clinical protocol in the ED, can identify and track ED patients with suspected sepsis, and effectively support quality improvement activities. and American Academy of Pediatrics (AAP) recommend universal HIV screening for all patients ages 13 to 64 years in all health care sites, including the ED. Since these guidelines, national practice patterns and barriers to adolescent HIV screening have yet to be assessed. Objectives: To assess current guideline knowledge, practice, and perceived barriers to ED-based adolescent HIV screening. Methods: An anonymous web-based survey was administered to attending-level physicians of the AAP Section on Emergency Medicine. Eligible subjects included US residents who provided ED care for patients < 21 years of age. Knowledge of current CDC guidelines, practice patterns for ED-based HIV screening, beliefs and barriers were assessed using Likert scale and multiple-choice responses. Descriptive and comparative analyses were performed to evaluate factors associated with HIV screening. conditions such as trauma must take place from the population perspective, but most outcomes analyses are performed at the level of the patient or hospital. Population-level evaluation is necessary to inform future adjustments to state trauma systems. Objectives: We performed a population-level analysis to assess the relation between access to trauma care and injury mortality. We calculated population access to definitive trauma care for each county in the US using census data, trauma center location, and estimated prehospital times. Injury mortality rates were calculated for all US counties using data from the National Center for Vital Statistics. Injury deaths that did not take place in the county of residence or a contiguous county were excluded. Access was examined as a dichotomous variable (+/-60 minutes and +/-90 minutes), as an ordinal variable (<30, 30-45, 45-60, 60-75, 75-90, 90+) , and as a continuous variable. Unadjusted and adjusted analyses were performed using negative binomial regression. Adjusted analyses controlled for age, sex, race, and percent unemployed. 
Results: Just over half (n = 1,685, 53.6%) of US counties have access to definitive trauma care within 60 minutes, and counties have injury death rates ranging from 0 -440/100,000. In unadjusted analyses, counties without access to trauma care within 60 minutes had higher rates of injury death when compared to counties with access to trauma care within 60 minutes (IRR 1.14, 95% CI 1.09-1.18). In a fully adjusted model, the effect was attenuated, but counties without access to trauma care within 60 minutes still maintained higher rates of injury death (IRR 1.06, 95%CI 1.00-1.12). Conclusion: We describe the use of a population-level measurement of injury death to assess the trauma system's effect on injury-related population health. We found that better population access to trauma care is associated with lower injury death rates. Population-level outcomes measures could be used to measure the effectiveness of regional systems of care for trauma and other unplanned, time-sensitive conditions, and could additionally serve as quality metrics to incentivize regional hospital cooperation to improve outcomes for their community. Objectives: We evaluated whether the accreditation of new level II and III TCs resulted in a change in the trauma patient census and severity at a nearby Level I trauma center. TCs in Pennsylvania over the past 10 years were obtained from the PA Trauma Systems Foundation. The Level I trauma center (TC-A,) was active for the entire period. A Level II TC 39 miles away was accredited after 70 months (TC-B,), one Level III TC 46 miles away was accredited after 95 months but lost accreditation after 11 months (TC-C), and two other Level III TCs 40 miles and 45 miles away were accredited after 107 months (TC-D, TC-E). Interrupted autoregressive integrated moving average (ARIMA) modeling was used to test whether reductions occurred in volume at the Level I TC. Tests of proportions compared ISS over time. Results: Monthly patient counts at the Level I TC increased over the study period and summed to 25,029 patients total. The time series for the Level I TC was fit with an ARIMA (0,1,1)(0,1,1,12) model. The number of patients treated monthly at the Level I TC decreased 10.8% (p<0.05) when TC-B was accredited and decreased an additional 12.9% (p<0.05) when TC-D and TC-E were accredited simultaneously. No change stemmed from the temporary accreditation of TC-C. As a result of the accreditations, the Level I TC treated 1,903 fewer patients than expected over a 51-month period, an 11.9% reduction in volume. The percent of patients at the Level I TC with ISS>15 was statistically but not clinically significantly higher during the last 13 months after TC-D and TC-E were accredited compared to the 69 months before the first new accreditation occurred (30.1% vs 28.4%, p<0.05). Conclusion: Accrediting Level II and Level III TCs reduced patient volume and increased severity but not meaningfully at a Level I TC nearby. Strategic planning of statewide trauma systems can help balance rapid access to care with maintenance of adequate annual patient volumes of critically injured patients. Background: Injury is a major contributor to morbidity and mortality in the US and access to trauma care is a Healthy People 2020 priority. The degree to which disparities in access to trauma care exists in the US is unknown. An improved understanding of disparities in access to care will best enable the strategic development of trauma care in the US. 
Objectives: We sought to describe geographic, racial, and socioeconomic disparities in access to trauma care in the US. We hypothesized that traditionally vulnerable populations would be less likely to have 60-minute access to trauma care. Methods: This was a retrospective cross-sectional population level analysis. We used trauma center access data from www.traumamaps. org, demographic data from the 2005-2009 American Community Survey and 2010 Neilson Claritus estimations, and county-level demographic data from the 2011-2012 Area Resource File. All analyses were performed at the level of the block group (BG). Our main outcome measure was access to a Level I or II trauma center within 60 minutes via ambulance or helicopter. Our main exposures were population subgroups described above. We performed adjusted analyses using logistic regression. Results: Using adjusted logistic regression, access was significantly (P<0.001) higher in areas with high proportions of non-white and Hispanic population, as well as areas with high per capita income and in urban areas. Access was significantly lower in areas with high rates of poverty and high proportions of uninsured. When we restricted this analysis by area of the country, we found that these same relations held when we looked at urban areas alone. However, when we restricted to rural areas, we found all of these relationships became the opposite except for per capita income, which was not significant, and proportion of uninsured, which was still associated with lower access. We found disparities in access to trauma care in the US as a whole; however, the nature of the disparity differed by rurality; some populations that were more likely to have access in urban areas were less likely to have access in rural areas. While the majority of the US has trauma center access within an hour, 29 million Americans still lack access. The disparities in access affecting vulnerable populations must be addressed as the trauma system continues to expand. To determine to what degree LGBT health is taught in EM residency programs and to determine whether program demographics affect inclusion of LGBT health topics. Methods: An anonymous survey link was sent to EM residency program directors via the CORD listserv. The 12-item descriptive survey asked the number of actual and desired hours of instruction on LGBT health issues in the past year. Perceived barriers and program demographics were also sought, including state, faculty employer type, same-sex domestic partner benefits, and presence of LGBT faculty and residents. Results: There were 124 responses to the survey with a response rate of 78%. Of the respondents, 74.4% reported they have not presented specific LGBT lectures or incorporated topics affecting LGBT health in their didactic curricula (66.9%) in the past year. The most significant barrier cited was the perceived lack of need for education on LGBT health (58.7%). The majority of respondents knew open LGBT faculty (64.2%) and residents (56.2%) in their programs. EM programs presented from 0 to 8 hrs on LGBT health averaging 45 minutes of instruction in the past year. EM programs support inclusion of 0-10 hrs of dedicated time to LGBT health, with a mean average of 2-3 hours suggested. Conclusion: The majority of EM residency programs do not have curricula specific to LGBT health, although most programs desire inclusion of these topics. Further curriculum development is needed to better serve the LGBT EM population. 
Background: Eight million health care workers are exposed to blood and body fluids annually. According to one study, 56% of emergency medicine residents reported blood exposure during training, yet only 46.7% of these exposures were reported to health care providers. Objectives: We sought to determine risk factors for resident needlesticks, reasons needlesticks are under-reported, use of prophylactic medications, and psychological consequences of needle sticks. Methods: Six-hundred and eighty-two U.S. emergency medicine (EM) and EM-combined residents completed an anonymous online 21-question multiple answer survey. Survey topics included number and circumstances of hollow-bore needlesticks, infectious disease status of patient, use of prophylaxis, and psychological consequences. Results: Six-hundred and eighty-two residents responded to a survey distributed to every U.S. emergency medicine (EM) or EMcombined residency program listed in the American Medical Association residency database, FREIDA. Twenty-eight percent of residents reported at least one hollow-bore needlestick. Most needlesticks occurred in the emergency department (78.1%) and occurred while performing central line placement (39.6%). Most residents reported feeling "rushed" during the procedure (82.7%), though only 63.8% of procedures were performed during a code or resuscitation. Twenty-three percent of patients were known to have an infectious disease (HIV or hepatitis). Many residents felt hesitant to report (41.7%). The most common reasons cited were 1) embarrassment, 2) not wanting to do paperwork, and 3) perceived insignificance. Some residents reported feeling "very distraught" the day of the needlestick (22%). Sixty-three percent did not take prophylaxis, most commonly because of negative patient labs (84.6%), side effects (0.1%), and cost (0.015%). Conclusion: Residents are not infrequently stuck with hollow-bore needles and are hesitant to report due to embarrassment, perceived insignificance, and paperwork, putting them at an increased risk for infection. Though not all needlesticks occurred during codes or resuscitations, most residents reported "feeling rushed" during the procedures. Background: Medical malpractice is a common and often feared aspect of emergency medicine. Physician education regarding how to respond to questioning and prepare for litigation or depositions is not often and not well taught. Objectives: To conduct a medical malpractice mock trial competition to teach emergency medicine residents the process of medical malpractice litigation, and help develop basic deposition and trial testimony skills. Methods: Ten residents in an academic emergency medicine program volunteered to act as defendant-physician witnesses in a medical malpractice mock trial competition, which was held at a local law school. Residents testified two or three times and after each appearance were provided contemporaneous verbal and written feedback, as well as a copy of their videotaped testimony, to help prepare for subsequent rounds of testimony. Four judges rated each resident using a 9-question survey scored on a 10-point Likert scale. The scores were compared as a group between the initial and subsequent rounds of testimony in the mock trial. Results: Participants demonstrated significant improvement in seven of nine categories on the survey. 
P-values reached significance in the following areas: Worked Well on Direct Examination (p < 0.001), Demeanor/Body Language (p < 0.001), Was Not Arrogant/Did Not Lose Poise on Cross-Examination (p = 0.001), Convincing Witness (p = 0.001), Appeared Knowledgeable (p = 0.012), Courtroom Attire (p = 0.012), and Expressed Themselves Clearly (p = 0.017). In the remaining two categories, scores were higher, but not significant. Conclusion: This novel educational collaboration, in the form of a medical malpractice mock trial competition at a law school, proved successful in teaching resident physicians in a hands-on, non-didactic setting about the process of malpractice litigation. Communication and presentation skills improved, and knowledge regarding issues of documentation and consequences of medical errors expanded. The collaborative nature of this project between a medical residency program and a local law school helped solidify it as a permanent feature of the residency program. to safety and quality in the emergency department (ED), and breakdowns in this process may lead to unsafe conditions or adverse events. Objectives: The objective of this study was to test the hypothesis that the quality of patient handoffs would improve after implementing a structured handoff method. Methods: This was an observational study, and data were collected prospectively. All data were collected at a free-standing, tertiary-care children's hospital. We developed a handoff tool after researching existing literature. This tool contains five components judged essential by our expert consensus group. The tool is described by the mnemonic SOUND -Synthesis, Objective Data, Upcoming Tasks, Nursing Input, and Double Check. We implemented SOUND through a mandatory educational module and reminder signs posted in ED team changeover areas. We measured the completeness of handoffs before and after implementation of the tool and used statistical process control (SPC) to measure the effects of the intervention. We defined a successful handoff as one in which four of the five components were included. As a balancing measure, we measured mean time per patient discussed before and after the implementation of the SOUND Model. Results: We observed 638 patient handoffs, 286 pre-intervention and 352 post-intervention. As demonstrated in the figure below, there was a significant increase in percentage of successful handoffs after implementation of SOUND. This improvement was evident in both trainees and staff physicians, and was associated with a mean increase in handoff time of 20 seconds per patient (52.9 vs. 73.0 seconds, p<0.005). The implementation of a structured handoff tool, SOUND, was associated with improved completeness of patient handoffs in the pediatric emergency department at all levels of training, with only a modest increase in the amount of time required to discuss each patient. Objectives: This pilot study was designed to extend prior work on resident psychological wellness. The central goals were to replicate and longitudinally validate the Brief Resident Wellness Profile (BRWP), an empirically validated tool for assessing residents' sense of professional satisfaction and mood, and to examine associations between psychological wellness and self-reported sleep quality over time. Results: Forty-seven residents provided at least one observation, for a total of 108 total observations across 6 months. 
To investigate the potential time-based associations between self-reports of sleep quality and wellness, we specified a series of multilevel models to characterize changes in these variables over the study period. No systematic changes were observed in either of these variables; thus, on average, the participants in this study did not report significant decreases in their psychological well-being or increases in sleep disturbance over the study period. We did, however, observe a significant lagged association between these variables. This association was specific from increased sleep disturbances to decreased wellness a month later (B = -0.70, SE = 0.34, t = -2.07, p = 0.04); we reversed the direction of the analyses to explore the lagged effect from resident well-being to sleep disturbances, and this effect was not significant (B = -0.01, SE = 0.02, t = -0.77, p = 0.44). Conclusion: Results from this pilot study suggest that residents' reports of sleep problems predict decreases in their overall wellness up to a month later. A limitation of this work is the low number of responses. However, we hope that continued data collection will further validate these associations between sleep quality and resident wellness. Objectives: To evaluate the accuracy of EM physicians' SA of direct laryngoscopic intubation skills and to evaluate the relation between actual skill and perception. Methods: In this prospective correlation study at an urban community teaching hospital, 44 EM attending physicians performed direct laryngoscopic endotracheal intubation on a TruCorp Airsim mannequin. Performance was evaluated against a checklist of 11 predefined procedural steps by a minimum of two peer physician raters. One point was awarded for each step. Additionally, psychomotor adeptness was rated on a Likert scale from 0-10, with 0 representing "Struggle" and 10 representing "No Struggle". An overall proficiency score was calculated by adding the checklist points to the adeptness rating and was expressed as a percentage of the total possible points. Following the procedure, physicians self-assessed their facility at performing the intubation using the same Likert scale. Pearson's correlation coefficient was used to determine the strength of correlation between self and peer assessments. Subsequently, the degree of disparity between self and peer assessment scores was calculated, and the association between one's overall intubation proficiency score and this disparity was evaluated with Pearson's correlation coefficient. Results: SA scores ranged from 5 to 10 (mean = 8.1, SD = 1.2); peer assessment scores ranged from 1 to 10 (mean = 7.9, SD = 2.1). Overall proficiency scores ranged from 33% to 100% (mean = 81%, SD = 16%). There was disparity between self and peer assessments as they only correlated moderately (r = 0.6, p < 0.001). The degree of this disparity had a strong negative correlation with one's overall proficiency at the procedure (r = -0.84, p < 0.001) as illustrated in the figure. Conclusion: Physician self-assessment of intubation proficiency correlates only moderately to peer assessments. Self-assessment becomes increasingly inaccurate as skill declines such that the lowest performing physicians are at greatest risk for an inflated perception of proficiency, and consequently are less likely to recognize the need for self improvement. Objectives: To determine the number of DL intubations performed in an academic emergency department with an emergency medicine residency training program. 
Methods: Data were collected prospectively on all patients undergoing tracheal intubation in an academic emergency department over a five year period (2008) (2009) (2010) (2011) (2012) . Following each intubation, the operator completed a standardized data form, including the patient's demographic information, as well as device(s) chosen to perform the intubation. Patients who were intubated using DL were compared to those intubated using a VL (GlideScope or C-MAC). Results: Over the five-year study period, the percentage of intubations performed with DL decreased from 55.9% to 24.2%. During the same time period, the percentage of intubations performed with VL increased from 36.1% to 72.6%. Please see table below for further results. Objectives: To identify parameters that correlate with ET skills degradation and determine the optimal timing of competency assessments for maintenance of certification. Methods: In this cross-sectional study at an urban community teaching hospital, 44 board-certified EM physicians were individually administered proficiency assessments for ET by direct laryngoscopy using a TruCorp Airsim mannequin. A minimum of two board certified EM physicians recorded performance scores using a standardized assessment tool that evaluated for completion of procedural steps and psychomotor adeptness. The electronic medical record was then queried for each physician's time interval since last supervising or last performing an ET as well as his or her average number of ETs per year, either supervised or individually performed. Pearson's correlation coefficient was calculated to identify the strength of correlation between assessment scores and these characteristics. Subsequently, ROC analysis was conducted on the characteristics to identify parameters that predict an assessment score less than 80%. Results: The mean assessment score was 81% (95% CI = 76% -86%). Scores had moderately strong correlation with average ETs performed/ yr (r = 0.6, p < 0.001), average ETs supervised/yr (r = 0.6, p = 0.001), and the interval since last supervised ET (r = -0.5, p = 0.002); however, the interval since last performed ET correlated poorly (r = -0.3, p = 0.07). ROC analysis identified, with good accuracy, that physicians score below 80% on assessments when average ETs performed/yr is below 1.2 (area = 0.87, p = 0.001), average ETs supervised/yr is below 2.5 (area = 0.81, p = 0.006), or if the last supervised ET exceeds 78 days (area = 0.83, p = 0.001). Conclusion: The average number of intubations per year, either performed individually or supervised, correlates well with assessment scores for this procedure. Physicians are at risk for poor procedural performance if they do less than 1.2 intubations per year or supervise less than 2.5 per year. Further research is warranted to validate the assessment tool and identify additional influences on intubation skills retention. Objectives: We sought to determine whether multiple tracheal intubation attempts performed by an operator were associated with a decreased success rate. Methods: Design and Setting: We conducted an analysis of a multicenter prospective registry (Japanese Emergency Airway Network registry) of EDs at 13 academic and community hospitals in Japan between March, 2010 and March, 2012. Data fields include patient and operator demographics, method of airway management, medications, number of attempts, and adverse events. Participants: All patients undergoing emergency tracheal intubation in ED were eligible for analysis. 
Primary analysis: We described intubation success rate and operator characteristics at each attempt. We further assessed the factors associated with success at repeated attempts using multivariable logistic regression models. Results: The database recorded 3872 encounters (capture rate 98%); 3858 met the inclusion criteria. Success rate of initial attempt were 2631/3858 (68%; 95%CI, 66%-69%). Success rates of second to fourth attempts performed by the same operator were 453/793 (57%; 95%CI, 53%-60%), 59/116 (50%; 95%CI, 41%-59%), and 13/29 (44%; 95%CI, 28%-62%), respectively (p=0.038 for trend). Independent predictors for intubation success at the second attempt included change in operator (OR, 4.29; 95%CI, ), change to senior operator (OR, 1.91; 95% CI, 1.46-2.52), and patient age (OR per 10-year increase, 0.92; 95%CI, 0.87-0.98). Among the patients who underwent second attempts by the same operator, independent predictors for success were change in method of intubation (OR, 2.23; 95%CI, 1.03-4.98), and attempts by senior operator (OR, 2.09; 95%CI, 1.54-2.84). Conclusion: In this multi-center prospective study in Japan, we observed that repeated intubation attempts performed by the same operator were associated with a decreased success rate. Implementation of rescue methods and backup by senior operators may increase success rates in emergency airway management. Complications of Airway Management Following Failed Noninvasive Ventilation Jarrod M. Mosier, Lisa A. Graham, Gordon E. Carr, and John C. Sakles University of Arizona, Tucson, AZ Background: Patients requiring intubation following failed noninvasive positive pressure ventilation (NIPPV) have a higher risk of mortality than patients intubated primarily. With the use of NIPPV in the emergency department (ED) and intensive care unit (ICU) increasing, complications of intubation after failed NIPPV are unknown and need to be investigated. Objectives: This study will evaluate patients requiring intubation after failed NIPPV to determine if there is a higher complication rate from endotracheal intubation than patients intubated primarily. Methods: Prospective quality improvement (QI) registry of ICU airway management in the 20+ bed medical ICU of a 450+ bed university medical center and the 12 bed mixed ICU of a universityaffiliated community hospital. All intubations were performed by pulmonary/critical care, critical care medicine, emergency medicine, or family medicine services. After each intubation in the ICU, the operator completed a standardized QI form which included patient demographics, clinical data, predictors of difficult airway (DAPs), method of airway management, medications, outcomes, and complications. Complications included desaturation, esophageal intubation, hypotension, dental trauma, mainstem intubation, or other. Objectives: To determine the effect obesity on the first-attempt failure of ED intubations. Our hypothesis was that obese patients had lower first-pass success. Methods: This was an analysis of a prospective continuous quality improvement database of 2,457 intubations performed in an academic ED over a 5-year period. Following each intubation, the operator completed a data form for multiple aspects of the intubation, including whether they felt the patient was obese. Multivariable logistic regression was used to investigate the association between patient obesity and first-attempt intubation failure, controlling for potential cofounders and independent risk factors for intubation failure. 
Results: Of the 2,457 total patient's intubated, 362 (14.7%) were identified as obese. Operator-identified obesity was significantly associated with increased first-attempt intubation failure in both univariate and multivariable analyses. First-attempt intubation failure was 493/2052 (24.0%) for non-obese patients vs. 130/362 (35.9%) in obese patients (absolute percentage difference: 11.9, 95% CI: 6.6, 17.2). The crude OR for intubation failure in patients assessed as obese vs. non-obese was 1.8 (95% CI: 1.4%, 2.3) and the adjusted OR was 1.4 (95% CI: 1.1, 1.9), controlling for age, sex, intubation method (video vs. direct laryngoscopy vs. other method), intubator training level (attending physician, residents in year 1-3, and medical students), as well as several difficult airway characteristics (presence of blood and/ or vomit in airway, a large tongue, airway edema, and small mandibles). Conclusion: Obesity is associated with a significantly reduced firstpass success rate when performing intubation in the ED. Patients who were identified as obese had 42% increased odds of first-attempt intubation failure. Identifying obese patients as a potentially difficult intubation may help providers choose appropriate intubation methods and devices to increase the likelihood of first-attempt success. Background: Current studies support the use of nasal cannula oxygenation administration during rapid sequence induction and intubation in order to extend the duration of safe apnea. In contrast, facemask oxygen administration is utilized in minimally interrupted cardiac resuscitation protocols to provide passive oxygen delivery during compression-only CPR. Little is known about the relative efficacy of these two oxygen delivery methods during apnea. Objectives: This study was designed to test the relative efficacy of nasal cannula compared to non-rebreather facemask in delivering oxygen to the lower airway during apnea in a mannequin model. Oxygen was delivered at various flow rates (5-30 L/min) through the mouth or nose of an airway mannequin normally used for training health care providers in use of airway adjuncts and direct laryngoscopy. Four different oxygen delivery methods were tested: simple nasal cannula (NC) without nasopharyngeal airway (NPA), NC with a 30 French NPA placed through each nare, non-rebreather facemask without oropharyngeal airway (OPA), and non-rebreather facemask with OPA. Steady state airflow was measured through the trachea at the level of the right mainstem bronchus using a pneumotachometer (MedGraphics Elite Series Plethysmograph). The left mainstem bronchus was closed while the right mainstem bronchus was open to the atmosphere during data collection. Results: Measured airflow (L/min) through the trachea according to oxygen delivery method: Conclusion: In this mannequin model, apneic oxygen delivery by nasal cannula was more efficacious than delivery by facemask at equivalent flow rates. The use of nasal cannula with bilateral nasopharyngeal airways resulted in markedly improved oxygen delivery compared to nasal cannula alone. Since the airway mannequin has no inherent airway obstruction, this superior efficacy provided by addition of the NPA is likely due to improved airflow dynamics through the upper airways. Future studies in apneic oxygenation should attempt to confirm this effect in human patients. Additionally, apneic oxygenation via high-flow nasal cannula should be considered in future clinical trials of compression-only CPR. 
Objectives: To determine the incidence and duration of hypoxemia during ED RSI and to ascertain if there is a relationship to preoxygenation and the number of intubation attempts. Methods: This is an IRB-approved observational study of existing practice conducted between 09/2011 and 07/2012 at an urban, academic, Level I trauma center ED using BedMasterEX TM (BMEX) data acquisition software, which prospectively records all numerical and waveform vital sign data from patient monitors every 5 seconds. The nursing record was utilized to determine the time paralytics were administered and the time tube placement was confirmed. Exclusion criteria included age < 18, server error, inadequate documentation, or a poor SpO2 waveform. Data were non-parametric and were analyzed using Mann-Whitney U. Results: 262 RSI attempts were conducted in the ED during the study period of 10 months. 96 attempts were excluded, leaving 166 patients included for analysis. The study group was 73% male with a median age of 51 (18 to 95). 75% of cases were pre-oxygenated to ! 93%. The median nadir SpO2 during intubation with preoxygenation SpO2 ! 93% and <93% were 95% (88, 98) and 84% (70, 89), respectively, which were significantly different (p<0.01). Desaturation less than 90% and 80% occurred in 36% and 19% of cases with a mean duration of 46AE8 sec and 19AE7 sec, respectively. More than one intubation attempt was needed in 25% of cases. The table compares the incidence and duration of desaturation for 1 attempt and >1 attempt. Conclusion: This study used a new data acquisition system to continuously record vital sign data during ED RSI. Desaturation to less than 90% occurred in 36% of patients for a mean duration of 46 seconds. Desaturation was significantly correlated with multiple intubation attempts and with preoxygenation SpO2 <93%. These data support the importance of strategies to maximize preoxygenation and first attempt success during ED RSI. Background: In sub-Saharan Africa, anemia is a significant contributor to mortality amongst patients with various medical and surgical conditions. Physicians routinely rely on simple clinical signs to detect anemia, grade its severity, and initiate necessary empirical resuscitation. Objectives: The study determined the physician's diagnostic accuracy in using clinical gestalt for detecting anemia and its severity among patients seen at the ED of a tertiary referral hospital in Tanzania. Methods: This was a prospective study of the diagnostic accuracy of physician clinical gestalt for detecting the presence and severity of anemia among patients seen in our ED over 2 months. A structured data sheet was completed by the physician before the lab results were available to ensure that the data accurately reflect the unbiased clinical Objectives: The primary aim of this study was to assess the microcirculatory function in post-cardiac arrest patients using microcirculation flow index (MFI). We hypothesize that microcirculatory dysfunction occurs in post-cardiac arrest patients and that better microcirculatory flow will be associated with improved neurological outcome. We conducted a single-center, prospective, observational study in an urban, university tertiary referral center from 9/09 to 11/10. We included adult patients with non-traumatic cardiac arrest, severe sepsis/septic shock, and controls (without acute illness). 
The sublingual microcirculation was imaged using Sidestream darkfield videomicroscopy with a Microscan (Microvision Medical, Netherlands) at 6 and 24 hours in post-cardiac arrest patients, and within 6 hours of emergency department admission in sepsis and control patients. MFI: a semi-quantitative method from Spronk et al. for image analysis (0: no flow to 3: normal flow). Good neurological outcome was defined as a Cerebral Performance Category of 1 or 2. We used Wilcoxon rank sum test to compare MFI between groups and general linear modeling for the adjusted analysis. We enrolled 30 post-cardiac arrest, 16 septic and 9 control patients (table). MFI was significantly impaired in post-cardiac arrest patients at 6 hours (2.6, IQR: 2 -2.9) and 24 hours (2.7, IQR: 2.3 -2.9) compared to controls (3.0, IQR: 2.9 -3.0; p=0.004 and 0.02 respectively). There was no difference in septic patients (2.8, .0) and postarrest patients in the unadjusted linear model. After adjustment for initial APACHE II score, post-cardiac arrest patients had significantly lower MFI at 6-hours compared to sepsis patients (p = 0.03) (figure). In the post-cardiac arrest group, microcirculation flow was significantly higher at 24 hours in good versus bad neurologic outcome (2.9, IQR: 2.4 -3.0 vs 2.6, IQR: 1.9 -2.8; p = 0.03). Conclusion: Microcirculatory dysfunction occurs early in postcardiac arrest patients. Better microcirculatory function at 24 hours is associated with good neurologic outcome. Objectives: We hypothesized that neurologically intact survivors would be more likely to be normoxic rather than hypoxic or hyperoxic. Methods: We conducted a retrospective chart review of 179 postarrest patients treated with therapeutic hypothermia who were entered into the Penn Alliance for Therapeutic Hypothermia (PATH) database from 11 institutions. Demographic variables were analyzed using chisquare tests. Logistic regression analyses were performed to assess the relationship between hypoxia (PaO2 equal to or less than 60 mmHg), normoxia (PaO2 between 60-300 mmHg), hyperoxia (PaO2 greater than 300 mmHg), mortality, and neurologic outcomes at first post-arrest arterial blood gas and at 12 hrs, 24 hrs, and 48 hrs post-arrest. Results: A total of 179 patients were enrolled. Patients were 59.6 AE 16.4 years, 54% male, and 54% Caucasian. The presenting rhythm was VF/VT in 31% of patients. Sixty-six (37%) survived to discharge, and of these 50 (76%) were discharged with good neurologic outcomes (cerebral performance category 1 or 2). We found that an increasing percentage of patients in this study had PaO2 values in the optimal range at consecutive time points (51% 0 hr; 82% 12 hrs; 90% 24 hrs; 93% 36 hrs; 92% 48 hrs) and significantly higher initial mean PaO2 Emergency Medicine Resident Leadership Ability: A Simulation-Based Longitudinal Study Matthew C. Carlisle, Samuel Clarke, Timothy Horeczko, Joseph D. Barton, Vivienne Ng, Sameerah Al-Somali, Gupreet Bola, and Aaron E. Bair UC Davis, Sacramento, CA Background: As emergency medicine (EM) residency programs prepare for the Next Accreditation System, outcome data demonstrating mastery of the core competencies and non-technical skills will become essential. Assessment in a simulated clinical environment has been proposed as a means of evaluating these new EM Milestones, but evidence for its ability to assess longitudinal growth is lacking. 
Objectives: We sought to determine whether a previously validated leadership and communication tool, the Ottawa Crisis Resource Management Global Rating Scale (Ottawa GRS), reflects longitudinal growth of crisis resource management (CRM) skills among EM residents. We hypothesized that residents would demonstrate significant gains in CRM skills as they progressed through each year of residency. Methods: Forty-five EM residents were tracked longitudinally during their annual skills assessments between 2006 and 2011. Participants were required to manage standardized simulated critical patients and were graded in real time by multiple faculty raters. A mixed-methods repeated measures regression analysis was used to compare elements of the Ottawa GRS (Overall, Leadership, Problem Solving, Situational Awareness, Resource Utilization, and Communication) to level of training in a three-year residency program. Results: There was a general increasing trend in Overall Ottawa GRS score over participants' training (scale of 1 to 7, 7 is best performance). Mean Overall scores were: PGY 1: 4.40 (IQR: 3.67 to 5.00); PGY 2: 5.74 (IQR 5.50 to 6.00); and PGY 3: 5.67 (IQR 5.33 to 6.00) (See table) . Individual overall performance scores trended by PGY level can be seen in the figure. In a multivariate longitudinal model, there was a significant difference in all components of the Ottawa GRS between PGY 1 and PGY 2 levels; there was no statistically significant difference in performance between PGY 2 and PGY 3 levels. The Ottawa GRS instrument reflects longitudinal growth in CRM skills differently over the course of an EM residency. Possible explanations include limited discriminative ability of the Ottawa GRS, non-linearity of the assessment tool, or need for more levelspecific simulation scenarios. Background: Emergency medicine (EM) physicians are expected to perform certain rare but important clinical procedures. Faculty in EM training programs may perform these procedures less frequently than other EM physicians, as they share them with residents. Many EM residency programs use simulation, but it is less utilized for faculty training. Objectives: To determine if completing a 2-hour simulation-based rare procedure lab improved EM faculty participants' self-rated confidence in the ability to perform and teach the procedures safely and effectively. Methods: This was a prospective, observational cohort study using a pre-and post-survey methodology for EM faculty physicians of an urban, Level I trauma center. A 16-item visual analog scale (VAS; 100mm) questionnaire administered before and after a standardized, simulation-based learning module assessed their ability to efficiently and safely perform and teach four different rare procedures: thoracotomy, lateral canthotomy, retrograde intubation, and ultrasound-guided IJ placement. Descriptive statistics were used to describe participants' experiences with each procedure. Wilcoxon signed rank test was used to compare pre-and post-training survey results. Results: Between February-April 2012, 20 EM faculty physicians completed the training. Physicians reported the most baseline experience with ultrasound-guided IJ placement and the least experience with lateral canthotomy. Participants' self-reported confidence in their ability to perform safely and efficiently and to teach each of the four procedures improved significantly after training (p < 0.05 for all comparisons). 
The average change in VAS score from preto post-training was largest for lateral canthotomy (efficiency residents at the lead institution (13 PGY-1s and 12 PGY-3s) in random order. Sessions were videotaped for independent review by two faculty observers. Data included the total number of critical actions (CA) achieved, time to each critical action (TCA), and a previously validated clinical performance evaluation (CPE) score. The CPE score is comprised of eight criteria, each with an eight-point possible score, with 8 being "excellent" and a 1 being "poor". The reported case scores are the average of the eight values. Descriptive statistics, Wilcoxon rank sum tests, and repeated measures analyses of variance using generalized estimating equations are reported. Results: For all of the cases, the mean proportion of CAs performed was 0.94 by the PGY-1s and 0.91 by the PGY-3s (p>0.05). For TCA, 11 CAs were analyzed and only two were found to have a significant difference: the PGY-1s had a time to second defibrillation attempt of 236 seconds compared with 291 seconds for the PGY-3s, and the PGY-3s had a time to epinephrine of 112 seconds compared to 164 for the PGY-1s (p<0.05). The overall mean CPE scores were 5.72 for PGY-1s and 6.38 for PGY-3s (p<0.05). The mean difference in CPE scores between the faculty observers was -0.39 (95%CI -0.55 to -0.23 Objectives: To compare CVC procedural competencies between medical and surgical specialty residents through simulation-based assessment. Methods: This is a prospective observational study of simulationbased CVC competency assessment of emergency medicine (EM), general surgery (GS), and internal medicine (IM) residents (postgraduate year 3 to 7). Eight attending physicians from EM and IM developed a standardized CVC insertion manual and nine-point checklist based on previous literatures. Participants were divided into medical specialty (IM) and surgical specialty (EM and GS). Simulation based assessment was based on ultrasound (US)-guided internal jugular venous catheter insertion on a simulator. The score more than 8 points was defined as pass. The primary outcome was the difference in pass rate by specialty. We used Fisher's exact test and Wilcoxon signed rank test for univariate analysis and logistic regression for multivariate analysis. Results: Of 40 residents who participated, 19 were surgical residents (10 GS and 9 EM residents). There were no significant differences in PGY, total numbers of CVC insertion or US-guided CVC insertion between groups. Significant differences were seen in scores and examination pass rates between groups. The median score of medical specialty was 6 (25%-75%IQR, 5-8) versus of surgical specialty was 8 (25%-75%IQR, 7-8; p=0.04). The examination pass rate of medical specialty was 29% versus of surgical specialty was 68% (p=0.03). In multivariable logistic regression analysis, after adjusting postgraduate year and total number of CVC insertions, specialty was independently and significantly associated with examination pass rate (odds ratio=0.11, 95% confidential interval=0.14-0.81; p=0.034). Conclusion: Surgical specialty residents perform better in simulation-based central venous catheter insertion assessment than medical specialty residents. Our study suggests the need of tailored way of CVC procedural training depending on resident specialty. Objectives: To determine the association between specific opiate prescribing patterns on statewide opiate related morbidity and mortality. 
Methods: This ecological study evaluated an 11-year period in Florida for trends in prescribing patterns of prescription opiates. Information for all oxycodone, hydrocodone, and methadone purchased by hospitals, pharmacies, and practitioners in Florida was obtained from US DEA. Purchases were excluded if they were bought out of state. This information was compared to data obtained from Florida Agency for Health Care Administration for the years 2000-2010 for all prescription opiate related: ED visits, hospitalizations, and newborn withdrawal. Information on opiaterelated deaths was obtained from the state medical examiner. Data for ED visits prior to 2005 and opiate-related deaths prior to 2001 were excluded due to absence of available data. Using Pearson correlation, we analyzed the prescription trends of oxycodone, hydrocodone, and methadone and its relationship to the following endpoints for prescription opiates: newborn drug withdrawal, deaths, ED visits, and hospitalizations. Conclusion: Oxycodone and methadone prescriptions were significantly correlated with increases in newborn drug withdrawal. All three drugs were correlated with increases in prescription opiaterelated deaths and hospitalizations. Additionally, oxycodone was significantly associated with prescription opiate related ED visits. Background: The dramatic rise in prescription opioid overdose deaths in the last decade has been staggering. Studies show that physicians have a racial/ethnic bias against prescribing opioid analgesics to non-white patients and that pharmacies in predominantly non-white neighborhoods have inadequate supplies of opioids. These biases may be disadvantageous to non-Hispanic white patients, reflecting in higher rates of prescription opioid overdose death. Objectives: To investigate racial/ethnic disparities in unintentional prescription overdose death rates in the US, and identify sociodemographic variables associated with overdose death. The five most recent years of the CDC's National Vital Statistics Mortality data (2006) (2007) (2008) (2009) (2010) were used. Our analysis included all reported US deaths for those age 15-64 who had race/ethnicity data recorded (N=3,125,520), of whom 75707 (2.4%) died from prescription opioid overdose. Demographic variables were compared between prescription opioid death vs deaths by all other causes. Logistic regression models were constructed to predict prescription opioid overdose death from race/ethnicity, controlling for year and sociodemographic variables. Results: Prescription opioid overdose deaths increased steadily from 13885 deaths in 2006 (2.24%) to 16683 deaths in 2010 (2.66%). The largest proportion of deaths was among adults between the ages of 25-34, where prescription opioid deaths accounted for 15423 (7.36%) of all deaths. Controlling for age, year of death, sex, and educational attainment, non-Hispanic white individuals had 3.56 (95% CI 3.5, 3.69) times the risk of dying from prescription opioid overdose compared to other races, with the greatest disparities among non-Hispanic white teenagers and young adults aged 15-24 years, who had 5.47 times the risk of dying compared to non-white teenagers and young adults (95% CI 5.03, 5.95). Conclusion: Non-Hispanic white teens and young adults had the greatest risk of dying from prescription opioid overdose compared to any other race/ethnicity or age group. This suggests that certain provider and system-level factors may play a protective role for nonwhite patients. 
A more detailed understanding of racial/ethnic disparities in prescription opioid overdose death will improve the focus and efficacy of prevention interventions for both providers and patients. Prevalence of Adverse Events From Opiates in the Emergency Department Raoul Daoust 1 , Jean Paquet 2 , Eric Notebaert 2 , Marcel Emond 3 , Gille Lavigne 2 , and Jean-Marc Objectives: To determine the prevalence of adverse events from opiates in the ED. Secondary outcomes are to determine their prevalence by type and route of administration. Methods: We performed post hoc analysis of real-time archived data from a computerized medical prescription and nursing records in an urban teaching hospital. We included all consecutive ED adult patients ( ! 16 years old) who had pain and received opiates from March 2008 to November 2010. Demographic and clinical characteristics were extracted from the electronic chart. Adverse events were defined by the presence of nausea, vomiting, systolic blood pressure (SBP) < 90 mmHg (if the SBP was ! 100 mmHg before the opiate), or saturation (Sat) < 92% (if Sat > 92% before the opiate) during the hour following the administration of the first dose of opiates. We also noted the opiate type and route of administration. We performed descriptive statistics and chi-square tests for association between adverse events, sex, age, and the type or route of administration of medication. Results: During our study period 19,617 patients received opiates; 52.2% of patients were female, the mean age was 54 years (SD AE 20.6), and 33.5% were ! 65 years old. There were 8.2% with adverse effects: 3.9% had nausea or vomiting, 1.5% drop SBP < 90 mmHg, and 3.3% had dropped their Sat < 92%. These adverse events were associated (P 0.01) with: female (more nausea, less sat < 92%), age ! 65 years old (more SBP drop < 90 mmHg, more sat < 92%, less nausea or vomiting), PO (globally less adverse effects; see table), IV fentanyl (less nausea, less Sat < 92%), and IV morphine (less SBP drop < 90 mmHg). Our results are similar to the literature except for IV morphine being associated with less SBP drop. Fentanyl is usually thought to produce less SBP drop. This could be explained by our retrospective design and fentanyl being used for patients at risk of significant SBP drop and morphine being used for more stable patients. When using opiates in the ED we should be aware that PO administration is associated with less adverse effects, age ! 65 years old with more SBP drop and desaturation, that women have more nausea and less desaturation, and that IV fentanyl produces less nausea and desaturation. Objectives: To determine whether prescribing an opiate at ED discharge correlates with increased patient satisfaction. Methods: This was a prospective cohort study at one Level I trauma center and teaching hospital ED. Inclusion criteria: discharged home from the ED, semi-random convenience shifts over a 10 week period, and allowed access to their medical records. Prior to discharge patients were asked to rate their satisfaction with their physicians using four standard Press Ganey questions on a scale of 1 (very poor) to 5 (very good). The four questions were based on: Q1 Courtesy, Q2 Time to listen to you, Q3 Kept you informed, and Q4 Concern for your comfort. Results: Eight of the 12 animals survived the OP exposure. The prepoisoning CO2 response curve for tidal volume (TV) demonstrates an incremental increase during 0-10% of inspired CO2. 
The pre-poisoning CO2 response curve for respiratory rate (RR) demonstrates a rapid increase from 0 to 4% followed by little change over 6-10% CO2. Animals post-OP demonstrate a more rapid increase in both RR and TV with a steeper slope over 0 to 4% CO2 (p=0.033). Seven of 8 animals demonstrate a greater TV at 10% post-exposure compared to preexposure. Conclusion: Respiratory control during sleep, as measured by CO2 response curves, is altered following OP exposure. Animals that survive an OP exposure demonstrate increased sensitivity to CO2 during sleep. Background: Russell's viper envenomation is a major problem in South Asia. The major envenomation syndrome is venom-induced consumption coagulopathy (VICC). Despite in vitro effects of Russell's viper toxins being well characterised, there is limited information on the dynamics of clotting function in vivo, the relationship of this to venom kinetics, and the effect of antivenom. Objectives: To measure clotting times, factor concentrations, and venom concentrations in Russell's viper envenomations to investigate the kinetics and dynamics of clotting function, including response to antivenom. Methods: Russell's viper envenomations were included from a prospective cohort of snake bite patients in Sri Lanka. Age, sex, type of snake, bite time, clinical effects and treatment were recorded. Serial citrated plasma and serum samples were collected in all patients. International normalised ratio (INR), prothrombin time (PT), activated partial thromboplastin time (aPTT), and coagulation factors I, II, V, VII, VIII, IX and X, were measured. Venom was measured in serum samples by enzyme immunoassay (EIA). Results: There were 147 definite Russell's viper envenomations; median age 39y (16 to 82y), and 111 (76%) were males. All patients had VICC and 70 (48%) had neurotoxicity. The median peak INR was 6.8 (IQR: 4 to >13) which was associated with low fibrinogen (median, 0.1g/L; IQR: 0.02 to 1g/L), low factor V levels (median <5%; IQR: <5 to 5.2%), and low factor VIII levels (median; 24%; IQR: 9 to 42%). There were smaller reductions in factors II, IX, and X. The INR, fibrinogen, and factors V and VIII recovered over 48 hours post-antivenom (figure). Factor VII levels were very high post-bite, median maximum concentration, 300% (IQR: 100 to 645%). The time course of factor VII directly correlated with venom concentrations (figure), demonstrating in vivo activity of the venom. Venom concentrations remained detectable post-antivenom in many cases and there was apparent recurrence 40h post-bite. Persistent venom concentration post-antivenom differed to factor VII, suggesting neutralisation of venom activity, and EIA measuring inactive bound venom. Conclusion: Russell's viper VICC is characterised by low fibrinogen, and low factors V and VIII, which recovered over 48 hours. High factor VII activity before antivenom and its immediate decrease after antivenom, suggests it is a good marker of venom activity. The Impact of the Bugaboo Wild Fires on Regional Emergency Department and Inpatient Visits Glenn R. Gookin 1 , and Josef G. Thundiyil 2 1 University of Central Florida, Orlando, FL; 2 Orlando Regional Medical Center, Orlando, FL Background: Studies have shown an association between particulate matter air pollution (PM 2.5 ) and increased risk of hospital admissions and deaths from cardiopulmonary causes, but studies evaluating the effect of fire-related PM 2.5 on health care utilization are lacking. 
Objectives: We sought to determine the effect of PM2.5 from the Bugaboo Florida Wildfires (BFW) of 2007 on regional cardiopulmonary ED visits and hospital admissions. Methods: This ecological study was conducted using data from seven counties of Florida affected by the BFW. PM2.5 concentrations in these regions were obtained from the Florida Division of Air Resource Management and compared for periods during the BFW and the equivalent dates one year prior (control period). All cardiorespiratory-related ED visits and hospital admissions of patients who presented during the case and control periods were obtained from the Florida Agency for Health Care Administration for these seven counties. Data were classified by discharge ICD-9 codes and demographics. Patients were excluded if they were admitted to hospitals outside of Florida or had hospital visits for non-cardiopulmonary reasons. The primary outcome measure was the population-adjusted percent difference in cause-specific ED visits and hospital admissions between the BFW and control periods. Analysis was done using z statistics. Conclusion: PM2.5 concentrations from regional wildfires were responsible for an increased burden of respiratory-related illness on local EDs and hospital admissions but not cardiovascular-related visits. This information may be useful to public health personnel in disaster preparedness and prevention of illness in vulnerable populations. Background: Inhalant abuse is associated with renal, hepatic, and neurologic abnormalities, arrhythmias, and sudden cardiac death, accounting for many emergency department (ED) visits. Various household and industrial products are implicated, including glues, paints, gasoline, and aerosols. The widespread availability of computer dust cleaner (CD), containing difluoroethane, contributes to abuse of these products. Objectives: We aimed to analyze inhalant cases reported to United States (US) poison centers to assess the prevalence of CD products compared to other inhalants and to evaluate trends in demographics, disposition, and outcomes. Methods: The National Poison Data System (NPDS) is a central repository of US poison center data. We performed retrospective analyses of NPDS data from 2002 to 2008 on product type, basic patient demographics, ED disposition, and outcomes for single-substance inhalant cases. Polysubstance cases were excluded. Proportion outcomes were analyzed via chi-square tests, continuous data using Student's t-test, and multiple categorical data via ANOVA. Results: 1814 cases of inhalant abuse were reported, with 1300 single-substance exposures. Use of CD products increased significantly over time, from 31.6% of inhalant cases in 2002 to 83.4% in 2008 (p<0.001). Males (69.8%) were more likely to use inhalants than females (30.2%, p<0.001). Younger patients were more likely to use CD (mean age 20.20 years, SD 8.49) than other hydrocarbon inhalants (mean age 30.67 years, SD 13.88, p<0.001). Patients using CD were less likely to require admission from the ED (8%) than patients using other products (26%, p<0.001). Regardless of product type, patients admitted from the ED were older than those discharged (mean admitted age 29.37 years, SD 12.24; mean discharged age 21.77 years, SD 10.49, p<0.001). There was no difference in outcome between patients who used CD and other products (p=0.078). However, patients with major effects or death were older than patients with less severe effects (no effect to moderate effect: mean age 22.53 years, SD 10.85; major effect or death: mean age 27.71 years, SD 12.89, p<0.001).
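The inhalant analysis above compares group means (e.g., ages of admitted vs. discharged patients) with Student's t-test. A minimal sketch in Python (SciPy) using hypothetical samples drawn to mimic the reported means and SDs, not the NPDS data themselves:

```python
# Two-sample t-test on simulated age data; Welch's variant avoids
# assuming equal variances between groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
age_admitted = rng.normal(29.4, 12.2, 150)     # hypothetical admitted patients
age_discharged = rng.normal(21.8, 10.5, 1000)  # hypothetical discharged patients

t, p = ttest_ind(age_admitted, age_discharged, equal_var=False)
print(f"t={t:.2f}, p={p:.3g}")
```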
Conclusion: Use of CD products for inhalant abuse is increasing and is particularly prevalent among younger patients. Patients who use inhalants are more likely to require hospital admission and to experience greater morbidity and mortality with increasing age. Background: Acetaminophen (APAP) is the most commonly ingested pharmaceutical in overdose and the one with the highest morbidity. The availability of a 20-hour intravenous N-acetylcysteine (NAC) infusion for treating acute, low-risk APAP overdose enabled our center to implement an emergency department observation unit (EDOU) treatment algorithm as an alternative to hospitalization. Objectives: To evaluate the utilization and performance of our early experience with EDOU treatment of APAP overdose. Methods: This retrospective cohort study included all patients treated for acute, low-risk APAP overdose in our academic hospital between September 2006 and July 2011. Cases were identified using EDOU records, ICD-9 codes, and pharmacy records. Data were abstracted from clinical records using a priori definitions and a standardized case report form. Successful EDOU discharge was defined as discharge home or to a psychiatric facility with no inpatient admission. Differences in medians with 95% confidence intervals and Mann-Whitney U tests were used for comparisons. Results: 196 patients received NAC for APAP overdose, with a mean age of 35 years (SD 14); 73% were white and 43% were male. Twenty (10%) received care in the EDOU. Three of 20 (15%) met objective EDOU admission criteria and 13/20 (65%) were discharged successfully. None of the seven observed patients who ultimately required hospitalization had met protocol criteria for treatment in the EDOU. There were 10/196 patients who met criteria for observation but instead received care in the inpatient setting. The median total length of stay (length of stay in the ED plus length of stay for any subsequent inpatient and psychiatric treatment) for patients admitted to the EDOU was 41 hours, compared to 68 hours for patients who met inclusion criteria but were admitted to inpatient services (difference 27 h, 95% CI 18-72 h; p=0.131). Conclusion: ED-based observation for APAP overdose can be a viable alternative to inpatient admission. Most patients were successfully discharged from the EDOU. None of the patients failing EDOU treatment met criteria for observation. This evaluation identified both over- and under-utilization. EDOU care resulted in a shorter length of stay than inpatient admission, although the difference was not statistically significant in this small sample.
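The APAP study compares median total lengths of stay with Mann-Whitney U tests and reports a confidence interval on the difference in medians. A sketch of both steps under assumed, hypothetical LOS distributions (the study's raw data are not shown):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
los_edou = rng.gamma(shape=4.0, scale=10.0, size=20)       # hypothetical EDOU LOS (h)
los_inpatient = rng.gamma(shape=4.0, scale=17.0, size=10)  # hypothetical inpatient LOS (h)

u, p = mannwhitneyu(los_edou, los_inpatient, alternative="two-sided")

# Percentile bootstrap for a 95% CI on the difference in medians
diffs = [np.median(rng.choice(los_inpatient, los_inpatient.size))
         - np.median(rng.choice(los_edou, los_edou.size))
         for _ in range(10_000)]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"U={u:.0f}, p={p:.3f}, median difference 95% CI ({lo:.0f}, {hi:.0f}) h")
```

The bootstrap is one of several ways to interval-estimate a difference in medians; the abstract does not state which method was used.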
Objectives: To identify associations between clinical variables and the clinical decision to obtain an ethylene glycol (EG) level or administer an alcohol dehydrogenase blocker (ADHBA). We also examined the associations between these variables and having a non-zero EG level. Methods: Retrospective cohort study of 230 patients from a single Poison Center that serves five states with rural and urban areas. Cases were ingestions reported between 2000 and 2010 that were evaluated at health care facilities. Cases were randomly selected and abstracted by a single reviewer using a standardized form. Independent variables included: patient characteristics, state, reason for ingestion (suicidal vs. other), and reported amount ingested. Dependent variables included: EG level measured, ADHBA administered, and EG level >0. Associations were determined with chi-square tests or logistic regression and quantified with relative risks (RR). Results: The variables associated with an increased probability of measuring EG levels were: suicidal intent, state, non-pediatric age, estimated amount ingested, and ethanol co-ingestion. The variables positively associated with ADHBA were: suicidal intent, state, non-pediatric age, male sex, estimated amount ingested, and ethanol co-ingestion. Among patients who had EG levels measured, the only variable associated with having an EG level >0 was suicidal intent (RR=1.4, 95% CI 1.01 to 1.98). Conclusion: While several characteristics were associated with measuring EG levels and with ADHBA, only suicidal intent was weakly associated with having an EG level >0. While these results are potentially limited by retrospective data collection, selection bias, and use of a single poison center's data, our findings suggest that the characteristics clinicians commonly use to risk-stratify patients with a history of EG exposure have little validity.
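The EG analysis quantifies associations as relative risks with 95% CIs (e.g., RR 1.4, 95% CI 1.01 to 1.98 for suicidal intent). A minimal sketch of an RR and its log-scale CI from a 2×2 table; the counts are hypothetical, not the poison center data:

```python
import math

# 2x2 table: exposure = suicidal intent, outcome = EG level > 0 (hypothetical)
a, b = 30, 70    # suicidal intent: outcome yes, outcome no
c, d = 20, 110   # other reason:    outcome yes, outcome no

rr = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of log(RR)
lo = rr * math.exp(-1.96 * se_log_rr)
hi = rr * math.exp(1.96 * se_log_rr)
print(f"RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```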
Objectives: Determine if CYP2D6 medication co-ingestion decreases the efficacy of hydrocodone. Methods: A convenience-sampled cohort of subjects in a university ED was prospectively enrolled between June 1, 2012 and October 31, 2012. Subjects were included if they had self-reported pain or nausea. Patients were excluded if they were unable to speak English, were less than 18 years of age, or carried a diagnosis of chronic pain or cyclic vomiting. Medication histories for the 48 hours preceding the ED visit were obtained. Pain and nausea were quantified by visual analogue scale (VAS) at baseline and between 30 and 90 minutes following administration of hydrocodone, ondansetron, or oxycodone. Descriptive statistics were used to characterize the incidence of CYP2D6 interactions. The Wilcoxon rank sum test was used to analyze VAS changes when subjects were taking zero or ≥1 CYP2D6-dependent medications. This represents an interim analysis of this dataset. Results: 333/446 were consented (74.7% consent rate). The age range was 18-89 years (median 39, IQR: 27, 52) and 36.0% were male. 153 received hydrocodone (n=41, 12.3%), oxycodone (n=77, 23.1%), or ondansetron (n=74, 22.2%) during their ED visits. 45.59% of patients were taking ≥1 CYP2D6-dependent medication, and 19.52% were taking ≥2 CYP2D6-dependent medications. In patients taking ≥1 CYP2D6-dependent medications there was a trend toward decreased efficacy of hydrocodone. Decreased oxycodone efficacy was observed in this group, though the effect was smaller than that observed with hydrocodone. There was a trend toward increased efficacy of ondansetron in this group. See the table for VAS data. Conclusion: Interim analysis suggests that CYP2D6 inhibition may decrease the efficacy of medications that require activation by CYP2D6 and may increase the efficacy of medications inactivated by the enzyme. Further enrollment is necessary to confirm these trends. Background: The utility of subjective report in pediatric asthma encounters has not been evaluated. Objectives: Determine if pediatric asthma exacerbation self-assessments correlate with validated objective respiratory scoring and are consistent with physician disposition assessments. Methods: We conducted a prospective observational study of pediatric ED asthma encounters in children aged 5-17 presenting from June 2011 to November 2012 in our urban academic center. Encounters were identified by trained research assistants monitoring our ED tracking system 16 h/day, 7 d/week on a university-based calendar. Patients completed before-after six-point Likert smile-scales and were asked how they felt after initial treatments. Physicians blinded to subject self-assessment measured after-treatment severity using the validated Pediatric Asthma Severity Score (PASS) instrument (score 0-6). The primary outcome was the difference in PASS score between those with substantial self-assessed improvement and those with less or no improvement. Demographic variables were recorded. Analyses were performed to ensure that distribution assumptions were met; t-tests, chi-square tests, and correlation coefficients were used as appropriate. Negative binomial regression was used to test whether subjects feeling "a lot" better scored lower on post-treatment PASS testing compared to others. Results: 203 subjects (59% male, mean age 9.8 years) were enrolled. 89% reported feeling better after initial treatment: 40% "a lot", 60% "a little" or no improvement. On objective respiratory assessment using PASS scores, subjects feeling "a lot" better scored 2.1 points lower than those feeling "a little" or no better (95% CI 1.1-3.8). Following initial treatment and independent recording of patient self-assessment, physicians planned to discharge 81% of those feeling "a lot" better and 52% of those feeling "a little" better (p<0.0001), suggesting a strong and significant association between physician disposition plan and self-assessment. Conclusion: Pediatric asthma self-report of feeling "a lot" better is associated with both objective respiratory improvement and physician assessment of potential safety for discharge.
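PASS is a small count-like score (0-6), which is why the authors model it with negative binomial regression rather than an ordinary linear model. A minimal sketch with statsmodels; the data frame is a hypothetical stand-in for the encounter-level data:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical post-treatment PASS scores and self-assessment indicator
df = pd.DataFrame({
    "pass_score":   [1, 0, 2, 4, 3, 1, 0, 5, 2, 3, 1, 4],
    "a_lot_better": [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0],
})

fit = smf.glm("pass_score ~ a_lot_better", data=df,
              family=sm.families.NegativeBinomial()).fit()
print(fit.summary())
```

Note that statsmodels' NegativeBinomial GLM family takes a fixed dispersion parameter (alpha, default 1); estimating alpha jointly requires the discrete-model API (sm.NegativeBinomial) or other tooling.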
3 and oxygen saturation (O2). The main outcome was death at any point during hospitalization. Of the 1088 patients, 1048 (96%) survived to hospital discharge and 40 (4%) did not survive. The mean age was 54 years (SD 19), 598 (55%) were male, 132 (12%) were trauma-related, and 114 (11%) were admitted to the ICU. The area under the ROC curve for Abnormal Prehospital End-Tidal Carbon Dioxide Levels Are Associated with a Diagnosis of an Acute ST-segment Elevation Myocardial Infarct in the Emergency Department. Christopher L. Hunter, Salvatore Silvestri. The Association Between Timely Percutaneous Coronary Intervention for ST-Elevation Myocardial Infarction and Emergency Department Crowding. Christiana Care Health System. Background: Performance of percutaneous coronary intervention 70.4% (231/328) of total patients screened were female. 50/313 (16%), which made up the study sample, tested positive for either GC, chlamydia, or both. 84% (42/50) of positive tests were female: 32 chlamydia, 3 GC, and 7 with both. 8/50 (16%) of positive tests were male: 7 chlamydia and 1 GC. No patients over 22 y/o (n=13) screened had a positive test. The mean asymptomatic rate of all patients 14-24 y/o was 12 Management Practices for Febrile Neonates in US Pediatric Emergency Departments. Shabnam Jain, Children's Hospital of Philadelphia. Neonates 0-28 days of age are considered at high risk for serious bacterial infections (SBI). Blood, urine, and cerebrospinal fluid (CSF) cultures and admission for antibiotics are considered standard management of febrile neonates. We analyzed performance of laboratory testing and patterns of antibiotic use by disposition. Results: 2379 infants met study criteria. 371 (15.6%) infants were evaluated and discharged from the ED; 2008 (84.4%) were admitted to the hospital. 3%), practiced in an urban setting (79.5%), and in a children's hospital (71.8%). CDC consent requirements were correctly recognized by 39%, while 28.3% correctly identified screening populations per guidelines. Many reported knowledge of state consent procedures (68.1%); agreement with actual state recommendations was poor. Each of these factors was also associated with agreement with universal adolescent ED-based HIV screening. The most frequently cited barriers to ED-based HIV testing were concerns about caretaker presence (67.4%), follow-up (67%), and cost-efficacy (65.5%). Though 84.4% of ED physicians agreed universal screening would increase access to HIV testing, only 55.8% felt it was their responsibility to test. Conclusion: ED clinicians exhibit poor knowledge of HIV screening recommendations for adolescents, and barriers are numerous. No Man Is An Island: Living in a More Disadvantaged Neighborhood Increases the Likelihood of Developing Persistent Moderate or Severe Neck Pain 6 Weeks After Motor Vehicle Collision. Jacob Ulirsch 1. In cross-sectional studies, reduced neighborhood addresses were geocoded and matched with 2006-2010 American Community Survey (ACS) data. Socioeconomic placement (SEP), a well-validated aggregate measure of nSES, was calculated using ACS data and split into quartiles representing low, low-middle, high-middle, and high nSES. impression of the patient's hemoglobin level. A second physician then provided a blinded clinical impression of the patient's hemoglobin level. Descriptive statistics, Kendall's tau, and the weighted Cohen's kappa are reported. Results: We enrolled 216 patients. Complete data were available for 210 (97%), 59% were male, and the median age was 30 years. The range of measured hemoglobin values was 1.5-15.4 g/dL. By hemoglobin level, anemia was classified as absent in 30 (14%), mild in 35 (17%), moderate in 62 (30%), and severe in 83 (40%). The treating doctor estimated the anemia to be mild or absent in 74 (35%), moderate in 72 (34%), and severe in 64 (30%). The doctors' gestalt estimates of the severity of anemia were significantly correlated with the corresponding laboratory hemoglobin measurements. Conclusion: Many patients presenting to this ED had elevated RBS or A1c. The use of POC A1c allows for rapid testing of the non-fasting patient, although confirmatory testing is required. Our findings suggest that RBS may also identify patients at high risk for DM. The high prevalence of abnormal RBS and A1c in this developing country suggests that the ED may be a useful site for DM screening. Characteristics of Adult Patients Presenting to Two Public Referral Hospitals in Cambodia. Lily Yan 1, Mackensie Yore 1, Elizabeth Pirrotta 1, Koy Somontha 2, Yim Sovannra 2, Erika Cornell 1, Maya Raman 1. Of the 44 different presenting complaints, fever (62%), respiratory problems (25%), and skin complaints (24%) were most common. Symptoms were acute (14 d) in 6%. In visits with recorded vital signs (639, 74%), 18% of patients had fever (T>38°C), and 47% had at least one other abnormal vital sign. Patient disposition included admission (51%), discharge (47%), and transfer to another facility (1%). Seven patients (0.8% of visits) died within 14 days of initial presentation.
For visits with vital signs recorded, predictors of admission by multivariate logistic regression included abnormal vital signs. Ambulance arrival was positively associated with a higher triage score (B = 0.12, SE 0.05). Patients referred from other facilities were almost twice as likely (OR 1.92) to arrive at the KATH AEC via ambulance as those not referred. Patients with injuries and higher-acuity patients were more likely to be transported to KATH by ambulance (OR 1.77 and 1.20, respectively). All results were highly statistically significant. Conclusion: Although a minority of patients were Motor Vehicle Crash Patients: An International Comparison: China vs. the US. Paul Ko 1. Zhejiang Provincial People's Hospital) is a Level I Trauma Center with over 60,000 annual ED visits. The same structured data collection form (translated into Chinese) was used at both hospitals. Patients above age 18 presenting to the ED after MVCs were eligible for inclusion. A convenience sample was undertaken. There were no significant differences in admission rates, ICU admissions, or deaths, and only a slightly higher number of patients went to the OR in China. Results: During the four-year study period the number of hospital admissions increased from 26,244 in 2008 to 30 8%) in 2010. The Casualty Room mortality rate was 0.30% (95% CI 0.24-0.37%) in 2008 and 0.39% (95% CI 0.32-0.47%) in 2009. The ED mortality rate was 0.9% (95% CI 0.79-1.0%) in 2010 1%) in 2011. six percent of participants living in rural areas cited at least one barrier to care, in contrast to 44% of participants living in urban/suburban districts. The most common CVD diagnoses were hypertension (62%), congestive heart failure (23%), and stroke (19%). The most common CVD risk factors were a family history of CVD (34%) and diabetes (25%). Trained staff collected data on injured patients 24 hours/day, including: reported circumstances of trauma, transport method and time, injury type and location, vital signs on arrival, and disposition. For admitted patients, length of stay, use of HIV testing, operative procedures, use of blood products, and 30-day vital status were also recorded. Results: Of the 3498 patients enrolled in the registry, the majority were male (71.6%) with a bimodal age distribution (<12 y, 25.9%; 21-50 y, 50.7%). The majority arrived by private vehicle (52.8%) or public transportation (37.9%). Over 62% arrived within 6 hours of injury. Falls (26.8%), road traffic injury (24.9%), and assault (20.2%) were the most common presenting complaints, with varying patterns by age. 1678 (48.9%) were admitted to the hospital. Hospital resource utilization data were available for 863 (51.4%). Of these, 661 (77.4%) had x-rays, 390 (45.3%) underwent HIV testing, 50 (5.8%) received blood products, and 468 (57.5%) had surgical procedures performed. The overall case fatality rate was 3%. Conclusion: Injuries result in a substantial burden of disease in Zambia. Lactate Clearance is Associated with Improved Survival and Neurological Outcome in Post-Cardiac Arrest. Michael Donnino 1, Lars W. Andersen 2, Tyler Giberson 1. values in survivors than non-survivors (242 vs. 187 mmHg; p=0.03). When controlling for initial rhythm and sex, initial hyperoxia (>300 mmHg) was associated with improved survival (OR 3.30, 95% CI 1.15-9.44, p=0.03). There was no relation between oxygen values at any point post-arrest and neurologic outcomes. Mean downtime for survivors with good outcomes was 21.5 ± 13.7 mins vs.
32.6 ± 21.7 mins (p=0.0036) for those with poor outcomes. Mean pre-induction time for good versus poor outcome was 144.5 ± 205.4 mins and 145.5 ± 171.1 mins, respectively (p=0.98). Mean induction time for good outcomes was 270.8 ± 160.8 mins and for poor outcomes was 252.3 ± 151.3 mins (p=0.479). When adjusting for sex vs 76 out of 203 patients (37%), p=0.24] in patients with CT done before ICU admission vs. those not emergently CT imaged, respectively. Conclusion: Obtaining CT prior to ICU admission did not delay 6; teach = 46.8 ± 28.8) and smallest for the rescue airway module (efficiency = 11.7 ± 20.3; safely = 11.3 ± 19.7; teach = 12.8 ± 18.7). Previous experience with the procedure did not affect pre- and post-training score improvement. 17% (95% CI 11-24) for the pulseless shockable arrest, 6% (95% CI 0.1-12) for the dysrhythmias, 12% (95% CI 4.3-21) for the respiratory scenario, and 14% (95% CI 7.3-21) for the shock scenarios. Inter-rater reliability was excellent, as demonstrated by an overall intraclass correlation coefficient of 0. Conclusion: Among frequent ED users, occasional and frequent psychiatric users have lower frequencies of comorbid medical issues; however, one in four frequent psychiatric users has significant concurrent medical conditions. Asthma Exacerbations in Japan: Conclusion: A community-based participatory approach to strategically placing AEDs is feasible. Further research will need to be conducted to assess the frequency of use of strategically placed AEDs. Objectives: To assess the association between prehospital end-tidal carbon dioxide (ETCO2) and the presence of an acute STEMI diagnosed in the emergency department (ED) in patients transported by EMS. Methods: We conducted a retrospective cohort study among patients transported to Orlando Regional Medical Center by EMS during a 1-year period. Records were linked by manual archiving of EMS and hospital data. We evaluated initial out-of-hospital vital signs including ETCO2, respiratory rate (RR), systolic BP (SBP), diastolic BP (DBP), pulse (P), and oxygen saturation (O2). The main outcome was the presence of an acute STEMI in the ED. Results: There were 1328 out-of-hospital records reviewed. Hospital data and all six prehospital vital signs were available in 1088 patients. Methods: EMS providers in charge of care for injured children (≤ 15 years) transported to pediatric trauma centers in three mid-sized cities were interviewed immediately after completing transport. Patients were included regardless of injury severity. The interview included patient demographics and the Field Triage Guidelines criteria: physiologic status, anatomic injury, and mechanism of injury. Included patients were followed through hospital discharge. The 1999 and 2006 Guidelines were each retrospectively applied to the collected data. Children were considered to have needed a trauma center if they had non-orthopedic surgery within 24 hours, ICU admission, or died. Data were analyzed using descriptive statistics. Objectives: We sought to determine the accuracy with which EPs correctly interpret a patient's wishes from his or her DNR order. Methods: In this prospective cohort study, an in-person survey was given to nine actual inpatients with DNR orders, asking them dichotomous questions about which treatments or interventions they would want if critically ill. All patients were identified through an electronic medical record at a single tertiary referral center.
Patients were excluded if they did not have normal mental status or a health care surrogate. Six of these patients were developed into clinical scenarios. These scenarios were incorporated into a survey asking EPs whether they would provide specific interventions for each patient given the DNR status. The treatments and interventions included: chest compressions, vasopressors, antiarrhythmics, ICU admission, central line, blood transfusion, invasive ventilation (ETT) for respiratory distress, ETT for respiratory arrest, noninvasive ventilation, defibrillation, cardioversion, and invasive surgical procedures. The survey was administered via SurveyMonkey® to a convenience sample of EPs across the US who were blinded to the patients' actual wishes. The primary outcome measure was the level of agreement between patients' wishes and EPs' interpretations of which interventions should be given. Fisher's exact test was used to analyze for differences between EPs and patients. Results: Of the six patients, ages ranged from 30 to 86 years, two were female, three had terminal cancer, and two needed health care surrogates to answer questions. Two hundred thirty EPs across 17 states initiated the survey: 65.8% were male, 18% were residents, 75.5% were board certified, and the average length of practice was 13.5 years. Interventions that patients with DNR orders were least likely to want included: defibrillation (17%), chest compressions (33%), cardioversion (50%), and ETT for arrest (50%). The significant discrepancies between patient and physician expectations were for: ETT without arrest (p=0.0001), ETT with arrest (p=0.0001), chest compressions (p=0.0001), vasopressors (p=0.009), and ICU admission (p=0.014). Conclusion: EPs' interpretations of DNR orders and patients' actual wishes often do not agree.
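The DNR survey compares, intervention by intervention, EPs' willingness to provide a treatment against the patients' stated wishes using Fisher's exact test, which is appropriate given only six patient responses per question. A minimal sketch with hypothetical counts:

```python
from scipy.stats import fisher_exact

# Rows: EPs, patients; columns: would provide/want ETT for arrest (yes, no)
# Counts are invented placeholders, not the survey results.
table = [[150, 80],  # EPs
         [3, 3]]     # patients
odds_ratio, p = fisher_exact(table)
print(f"OR={odds_ratio:.2f}, p={p:.4f}")
```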
Objectives: Our objectives were to assess the proportion of older adults who require contrast-enhanced abdominal/pelvic (ab/p) CTs in the ED within 24 hours after non-contrast CTs, and to assess the role of contrast-enhanced CTs in altering the diagnosis and care of patients after non-contrast CTs. Methods: This was a retrospective study of emergency department (ED) patients ≥ 65 years who received ab/p CTs between March 2010 and March 2012. We analyzed ab/p CTs to determine the type of scan performed and whether repeat CTs were performed within 24 hours in the ED. We completed a chart review of patients who received contrast-enhanced CTs within 24 hours of the initial non-contrast CTs to identify the scan results and clinical management. We present proportions, means, medians, and 95% confidence intervals (CI). Results: Our community teaching hospital ED performed 3227 ab/p CTs on patients aged ≥ 65 years during the study period. This represented 2718 patients and 2216 initial non-contrast CTs. Less than 1% (18/2718, CI 0.4-1%) of patients received repeat ab/p CTs within 24 hours in the ED. Non-contrast CTs were initially performed on 72% (13/18, CI 47-90%) of these patients, while 28% (5/18, CI 10-53%) had initial contrast-enhanced CTs. Only 0.5% (12/2216, CI 0.3-0.9%) of the initial non-contrast CTs were followed by repeat contrast-enhanced studies within 24 hours. The median age of these patients was 78 years. The mean and median times elapsed between CTs were 255 and 201 minutes, respectively. Forty-two percent (5/12, CI 15-72%) of repeat CTs increased the level of certainty and 33% (4/12, CI 10-65%) of repeat CTs changed the diagnosis. Thirty-three percent (4/12, CI 10-65%) of patients underwent surgery and 50% (6/12, CI 21-79%) received antibiotics following the second CT. Conclusion: Over 99% of older ED patients were initially evaluated with non-contrast CTs and did not need subsequent contrast-enhanced CTs in the ED. This finding suggests the adequacy of non-contrast CT in the assessment of undifferentiated abdominal pain in older adults. In the 0.5% of patients who required second CTs with the addition of contrast, the repeat imaging did improve the level of certainty or the diagnosis. Conclusion: Our findings do not support the idea that an increased number of CT scans is associated with decreased mortality in adult blunt trauma patients. In view of the negative effects of increased scanning, this practice should be evaluated more carefully. Objectives: To determine the failure rate and safety of 8 cm catheters at the traditional location of the second intercostal space mid-clavicular line (MCL) and at the fourth intercostal space anterior axillary line (AAL). Methods: Radiographic analysis of chest computed tomography of 100 consecutive trauma patients greater than 18 years of age at an urban Level I trauma center from January to June 2011. Chest wall thickness (CWT) and depth to vital structure (VS) at the MCL and AAL were measured. Measurements at the AAL were obtained based on two approaches: closest vital structure from the skin surface regardless of angle of entry (AAL-close) and perpendicular to the chest wall (AAL-perpendicular). Angle of entry from horizontal (AOE) was obtained for Objectives: To measure performance of a hybrid natural language processing (NLP) and machine learning system for automated outcome classification of ED CT imaging reports. Our hypothesis is that such a system is comparable to medical personnel. Methods: We performed a secondary analysis of a prior diagnostic imaging study on 3,710 blunt facial trauma victims. Staff radiologists dictated CT reports as free text, which were then de-identified. A trained data abstractor manually coded the reference standard outcome of acute orbital fracture, with a random subset double-coded for reliability. The dataset was randomly split evenly into training and testing sets. We used training patient reports as input to the Medical Language Extraction and Encoding (MedLEE) NLP tool to create structured output containing standardized medical terms and modifiers for certainty and temporal status. Findings were filtered for low certainty and past/future modifiers, and then combined with the manual reference standard to generate decision tree classifiers using the data mining tools WEKA 3.7.5 and Salford Predictive Miner 6.6. Performance of the decision tree classifiers was evaluated on the testing patient reports.
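The classification pipeline above (MedLEE structured output feeding WEKA and Salford decision trees) depends on specific tools; the sketch below only approximates the idea with a plain bag-of-words representation and scikit-learn's decision tree. The report snippets and labels are toy stand-ins, not the study corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical de-identified report snippets; 1 = acute orbital fracture
reports = [
    "acute orbital floor fracture with muscle herniation",
    "no acute fracture identified",
    "comminuted orbital wall fracture, acute",
    "old healed fracture, no acute findings",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                    DecisionTreeClassifier(random_state=0))
clf.fit(reports, labels)
print(clf.predict(["acute fracture of the left orbit"]))
```

A real replication would also need the certainty and temporal filtering that MedLEE provides (discounting "no acute fracture" and "old healed fracture" findings), which a raw bag-of-words model only learns implicitly.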
Objectives: To elucidate the underlying protective mechanisms, we examined the effects of TSN-SS on HMGB1-induced release of various cytokines/chemokines, and evaluated its therapeutic potential in an animal model of lethal endotoxemia following administration via a clinically feasible route. Methods: Murine macrophage-like RAW 264.7 cells or human monocyte U937 cells were stimulated with recombinant HMGB1 (2-4 µg/ml) in the absence or presence of TSN-SS (100 µM) for 16 hours, and levels of various cytokines and chemokines in the culture medium were determined using Cytokine Antibody Arrays. Male Balb/C mice (20-25 g, 6-7 weeks old) were subjected to lethal endotoxemia, and TSN-SS was given intravenously to evaluate its long-term (two-week) therapeutic efficacy. Results: In vitro, TSN-SS effectively inhibited HMGB1-induced release of IL-8 and MCP-1 (by >80%) in U937 monocytes, and completely prevented HMGB1-mediated release of IL-6 and RANTES in RAW 264.7 macrophages. In vivo, intravenous administration of TSN-SS at 0.5 h, 24 h, and 48 h post-endotoxemia significantly increased the animal survival rate from 14% (in the control group receiving LPS + saline, N = 28 mice/group) to 33% (in the experimental group receiving LPS + 5.0 mg/kg TSN-SS, N = 30 mice/group) and to 50% (in the experimental group receiving LPS + 15 mg/kg TSN-SS, N = 20, P < 0.05). Conclusion: TSN-SS effectively inhibited HMGB1-induced release of chemokines in macrophages/monocytes in a dose-dependent manner, and protected animals against lethal endotoxemia when given via a clinically feasible route. Objectives: Our goal was to determine the relationship between overall ED crowding and efficiency and performance on pneumonia quality measures. Methods: Quality measure performance data from 10/1/2010 to 9/30/2011 were obtained from CMS through data.medicare.gov. Measures specific to pneumonia include the percentage of patients receiving initial antibiotics in compliance with CMS guidelines and the percentage of blood cultures obtained before antibiotic administration. We grouped hospitals based on the performance standard (equal to the 50th percentile for each measure) and the benchmark standard (mean performance among the top decile of hospitals) defined by CMS for each measure. We then linked these data to ED data from the Emergency Department Benchmarking Alliance's 2011 survey. ED crowding metrics were described for hospitals grouped by quality measure performance. Performance differences across groups were assessed with the Kruskal-Wallis test. Results: Data comprised 427 hospitals. Mean compliance with initial antibiotic guidelines was 96%. Mean compliance with blood culture timing was 97%. ED crowding and efficiency metrics consistently differed across hospitals stratified by quality measure performance. Median ED length of stay was 160 minutes among hospitals meeting the antibiotic selection benchmark standard, compared with 197 minutes among those below the performance standard (p = 0.003). Similarly, median length of stay was 160 minutes among hospitals meeting the blood culture benchmark standard, compared with 186 minutes for those failing to meet the performance standard.
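Comparing a crowding metric across three or more hospital performance groups calls for the Kruskal-Wallis test the authors used. A minimal sketch with hypothetical ED length-of-stay samples:

```python
from scipy.stats import kruskal

# Hypothetical median ED LOS (minutes) for hospitals in each stratum
benchmark = [150, 160, 155, 170, 148]
performance = [172, 180, 168, 190, 175]
below = [195, 200, 197, 210, 188]

h, p = kruskal(benchmark, performance, below)
print(f"H={h:.2f}, p={p:.4f}")
```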
Objectives: To determine rates of asymptomatic GC and/or chlamydia infection in an urban PED in Newark, NJ. Methods: Prospective enrollment of a convenience sample of asymptomatic male and female patients presenting between January 2011 and September 2012. Inclusion criteria: age 14-24 y/o, medically stable, absence of GU and abdominal symptoms. Exclusion criteria: medically unstable, patient refusal, recent testing or treatment by report, need for urine self-catheterization. The New Jersey Department of Health and Senior Services provided funding for urine screening kits and testing in conjunction with the CDC and the Infertility Prevention Project. Pediatric Emergency Medicine fellows, attending physicians, or trained research volunteers screened patients, who were given specific written follow-up instructions to obtain test results and, if needed, receive free treatment and partner notification and treatment. Conclusion: In populations with increased risk for GC and/or chlamydia infection and high rates of symptomatic disease, ED screening of asymptomatic patients reveals a concomitantly high rate of asymptomatic disease. Routine urine screening for asymptomatic GC and/or chlamydia infection within the ED should be considered, and high follow-up rates support this screening. In 2010, a majority of febrile neonates were admitted to the hospital. ED management varied by disposition. The majority of admitted neonates underwent the full recommended evaluation for fever, and almost all received antibiotics. The majority of discharged patients received neither diagnostic testing nor presumptive antibiotics. Results: There were 4,513 observations for analysis, representing a nationally weighted population of 19,312 non-trauma center ED encounters for major trauma. Mean age was 44 years and the mean ISS was 19; 55% (95% CI 51-58%) were admitted at the non-trauma center. The adjusted absolute risk of admission vs. transfer was 12% higher (95% CI 8-17%) for patients with government insurance and 14% higher (95% CI 9-20%) for patients with private insurance relative to patients who were uninsured. The risk of admission vs. transfer was also 25% higher (95% CI 14-35%) if presenting to an urban teaching hospital vs. an urban non-teaching hospital, and 4% higher (95% CI 2-6%) for every additional 10,000 annual ED visits. Conclusion: Despite adjustment for patient, injury, and hospital-level characteristics, insured patients and those with initial care in higher-volume urban teaching hospitals had an increased risk of hospitalization in a non-trauma center rather than transfer to a potentially higher level of care. Background: The Hawthorne Effect is a phenomenon through which employee productivity improves by virtue of employees' awareness that their work is being observed. Off-service residents rotating in the emergency department (ED) are perceived to be less productive than their emergency medicine (EM) resident counterparts. Possible reasons include unfamiliarity with the ED and the specialty, or a general lack of motivation and interest. The use of the Hawthorne Effect has not been previously studied with off-service residents in the ED. Objectives: We sought to determine whether the Hawthorne Effect could be used to increase the productivity of off-service residents rotating in the ED by requiring the rotating residents to complete shift cards for each of their shifts. The cards were reviewed and signed by senior EM residents or faculty members. Methods: This was a prospective cohort study conducted at an urban, tertiary, Level I trauma center from January 2012 to December 2012. We implemented the use of shift cards for off-service residents during their EM rotations. Off-service residents were required to complete shift cards following each shift. Completion of the shift card involved recording patients seen and their dispositions, procedures performed, and a bedside teaching point learned during the shift. At the end of the shift, a senior EM resident or attending signed the shift card and provided feedback. Productivity was measured in terms of patients seen per hour (PPH) and total relative value units per hour (RVU/h). Data were analyzed using Student's t-test and analysis of variance to compare pre- and post-intervention data of off-service residents and their counterpart EM residents. Results: Off-service residents showed a productivity of 0.562 ± 0. (Figures 1 and 2).
Conclusion: The use of shift cards is a tool that can foster motivation via the Hawthorne Effect for off-service residents rotating in the ED, and it is a simple and cost-effective method to improve system-based practices and utilization of resources. Conclusion: Doctors' estimates of the severity of anemia were significantly correlated with laboratory hemoglobin measurements. Sensitivity of the gestalt estimate for severe anemia was moderate. Inter-observer agreement was excellent. Background: Emergency medicine is in its nascent stages in many developing nations, including Cambodia. While accurate epidemiologic data describing Cambodian patients with emergency medical conditions are currently lacking, such data are essential for the development of effective health care infrastructure and emergency medical training programs. Objectives: Describe the characteristics and outcomes of adult patients seeking immediate medical care at Cambodian public hospitals. Methods: This prospective, observational study enrolled a convenience sample of all adult (≥ 18 y/o) patients presenting without appointments to two Cambodian public referral hospitals for 23 consecutive weekdays over 5 weeks (Jul-Aug 2012). Real-time clinical and demographic data were collected from patients, hospital staff, and medical records using a standardized survey. Follow-up information was obtained in person and by telephone at 2 and 14 days. Multivariate logistic regression was performed to determine factors associated with hospital admission. Results: 1295 visits were enrolled, with 2- and 14-day follow-up rates of 83% and 75%, respectively. Mean age was 42 years (SD 17), with 63% of visits by females. In 45% of visits, low-income insurance assistance was provided. Most arrived by motorbike (57%), Tuk Tuk (25%), or ambulance (9%), traveling an average distance of 28 km (SD 33). Transfers accounted for 25% of cases. The top three chief complaints were abdominal pain (36%), respiratory problems (15%), and headache (13%). In visits with vital signs taken (81%), abnormal vital signs, excluding temperature, were present in 22% on arrival. 63% of visits resulted in admission and 15% underwent surgery. Predictors of admission by multivariate logistic regression were symptom onset ≤ 3 days (OR 5.4, 3.7-7.8), abnormal vital signs (OR 3.3, 2.1-5.0), complaint of fever (OR 3.2, 1.6-6.2), male sex (OR 2.6, 1.8-3.8), age ≥ 65 (OR 2.5, 1.5-4.1), abnormal temperature (OR 1.9, 1.3-2.9), and time to hospital (OR 1.6, 1. Background: Injury is a major public health problem worldwide, especially in low- and middle-income countries (LMIC), where 90% of injury deaths occur. In limited-resource settings, identifying patients requiring additional resources is essential. A variety of injury scores are used, many of which require advanced testing not available in LMICs. The Kampala Trauma Score II (KTS II) has been advanced as a low-resource method to assess injury severity. It predicts mortality but not need for admission when applied to trauma patients in Uganda. Objectives: To assess the ability of the KTS II to identify patients at risk of death or in need of admission, transfusion, and surgery at the University Teaching Hospital (UTH), a large urban hospital serving 1.5 million people in Lusaka, Zambia. Methods: Patients presenting with injuries to UTH were studied from September to February 2012. Data were collected on injured patients 24 hours/day, including circumstances of trauma, transport method and time, injury type and location, vital signs on arrival, and disposition.
A KTS II score (min 0, max 10) was calculated on patient arrival. Disposition, length of stay, operations, use of blood products, and vital status were recorded. Conclusion: The KTS II injury severity score identified patients at increased risk of death. There was a statistically significant difference in scores between admitted and non-admitted patients and between those who did and did not require additional hospital resources, but these minimal differences were unlikely to be clinically relevant. Additional work is needed to identify an injury severity score that can guide resource allocation in resource-limited settings. Objectives: This study reports the trend in ED and hospital mortality associated with the opening of a new full-capacity ED in a tertiary-level hospital in Sub-Saharan Africa. Methods: This was a retrospective study of the ED and hospital mortality rates for adult and pediatric patients admitted to Muhimbili National Hospital in Dar es Salaam, Tanzania from 1/2008 to 12/2011. This period represents two years before and two years after the opening of the full-capacity ED in January 2010. We excluded neonatal and obstetric mortality, as these patients are not admitted through the ED. Trained abstractors analyzed patient attendance registers, nurses' report books, hospital executive report books, and death certificate books, and recorded data on a standardized collection form. The 2008 and 2009 data are from the limited-capacity Casualty Room (the precursor of the ED); the 2010 and 2011 data are from the ED. Data are presented as proportions or differences with 95% CIs. Conclusion: The opening of a full-capacity emergency department in a tertiary-level hospital in Sub-Saharan Africa was associated with a significant decrease in hospital mortality during the first year of operation. This decrease in mortality was maintained in the second year. This was despite a small but significant increase in the mortality rate in the ED as compared to the Casualty Room that it replaced.
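The mortality figures in this section are proportions with 95% CIs. One reasonable way to compute such an interval (the abstracts do not say which method was used) is the Wilson score interval, available in statsmodels; the counts below are hypothetical:

```python
from statsmodels.stats.proportion import proportion_confint

deaths, admissions = 95, 30000  # hypothetical one-year counts
rate = deaths / admissions
lo, hi = proportion_confint(deaths, admissions, alpha=0.05, method="wilson")
print(f"mortality {rate:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```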
Objectives: To create and evaluate a curriculum, applicable to any global health (GH) rotation, that requires students to take an active role in their education and promotes engagement with the local medical community, health care delivery system, and colleagues. We hypothesize that student learning can be directed to standardize the GH curriculum. Methods: Prospective, observational, mixed-methods (quantitative and qualitative) study of fourth-year students enrolled in GH courses at UCLA in 2011-12. Course directors identified four topics common to all rotations (traditional medicine, health systems, limited resources, pathology) and developed activities for students to complete abroad: observation, interviews, and reflection on resources, pathology, and medical practices, and comparing/contrasting their experience with the US health care system. Students posted responses on a discussion board moderated by faculty in the US. After the rotation, students completed an anonymous internet-based evaluative survey consisting of five-point Likert scale items and free-text response questions. Responses were tabulated. Qualitative data were analyzed using grounded theory with identification and coding of themes. Results: Fourteen students enrolled in GH electives and completed the activated learning assignment. Twelve submitted the post-rotation survey (85.7%). Activated learning enhanced GH education for 67% and facilitated engagement in the local medical culture for 67%. Qualitative analysis revealed five major themes supporting activated learning: guided learning, stimulation of discussion, shared interactions, cultural understanding, and knowledge of global health care systems. One major theme emerged for future improvement: increased interactivity. Conclusion: Our activated learning program directed student education on GH rotations. The intervention standardized the curriculum and promoted engagement in local medical culture, pathology, and delivery systems. Increased interaction between students at different sites and US-based faculty may augment the effect of activated learning. Objectives: This study's primary objective was to further characterize Kenya's CVD epidemic by assessing the influence of socioeconomics on access to care among patients with CVD presenting to the emergency department (ED) of Kenya's National Teaching and Referral Hospital. Methods: In this cross-sectional observational study, physician-administered questionnaires were used to collect clinical, demographic, and health care utilization data from 112 patients with CVD presenting to Kenyatta National Hospital's ED in Nairobi, Kenya. Enrolled patients were 18-89 years old with new or previous diagnoses of hypertension, heart disease, stroke, and/or deep vein thrombosis. Access to care was assessed by patients' self-reported access to regular sources of medical care, barriers to care, access to CVD medications, years since CVD diagnosis, and household income/location. This study received IRB approval, and verbal informed consent was obtained from all participants. Results: Of the patients interviewed, 27% had new-onset CVD, with an average age of 48 years. Seventy-three percent had previous CVD diagnoses, with an average age of 50.5 years and an average time of 7.6 years since diagnosis. Of patients with new-onset CVD, 47% had regular sources of medical care and 67% cited at least one barrier to care, particularly cost and transportation. In contrast, 86% of patients with known CVD had regular access to care, with 74% reporting at least one barrier. Background: Injury is a major public health problem worldwide. The problem is especially acute in low- and middle-income countries (LMIC), where over 90% of the world's deaths from injury occur. In these countries, where EMS care, accident reports, and post-mortems are rare or non-existent, trauma registries are essential for injury research, injury prevention, and quality improvement of injury care. There are limited data available for most LMICs, such as Zambia. Objectives: To develop a hospital-based, minimal data set trauma registry in a large urban tertiary hospital in Lusaka, Zambia to assess the causes of trauma, patterns of injury, transport methods and duration, mortality, and hospital resource utilization. Background: Previous single-center retrospective studies demonstrated an association between lactate clearance and in-hospital mortality in post-cardiac arrest patients. These findings have yet to be validated in a multicenter study. Objectives: To determine the association between lactate clearance and mortality in post-cardiac arrest. We hypothesized that outcomes would be better in patients who were better able to clear lactate following cardiac arrest. Methods: Four-center prospective observational study conducted from 6/11 to 3/12. Inclusion criteria consisted of adult non-traumatic out-of-hospital cardiac arrest patients who were comatose after ROSC.
The primary outcome was the difference in lactate clearance between survivors and non-survivors at 12 hours and between those with good vs. poor neurological outcome. Secondary outcomes included the association of lactate clearance with neurologic outcome, and differences in lactate levels at 0, 12, 24, 36, and 48 hours between survivors and non-survivors and between those with good vs. poor neurological outcome. We used linear mixed modeling to assess lactate clearance over time and the Wilcoxon rank sum test to compare lactate at individual time points. Simple descriptive statistics were used to describe the study population. Results: One hundred patients were analyzed. The median age was 63 years (IQR: 50-75) and 40% were female. 97% received therapeutic hypothermia and overall survival was 46% (see table). Survivors and patients with good neurological outcome had lower lactate levels at all time points (p < 0.05; see figure). Lactate clearance at 12 hours was less effective in non-survivors and in those with poor neurological outcome (p < 0.05). After multivariate adjustment, less effective lactate clearance at 12 hours remained statistically significantly associated with higher mortality (OR 2.2, 95% CI 1.1-4.4) and worse neurological outcome (OR 2.2, 95% CI 1.1-4.5). Conclusion: Lower initial lactate as well as more efficient lactate clearance in post-cardiac arrest patients is associated with improved in-hospital survival and better neurologic outcome.
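The abstract does not give the exact clearance formula used; the conventional definition is the percent change from the initial value, as in this sketch:

```python
def lactate_clearance(initial_mmol_l: float, later_mmol_l: float) -> float:
    """Percent lactate clearance between two time points.
    Positive values mean lactate fell; negative values mean it rose."""
    return (initial_mmol_l - later_mmol_l) / initial_mmol_l * 100.0

# Hypothetical 0 h and 12 h values
print(lactate_clearance(8.0, 3.5))  # 56.25% clearance
```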
Objectives: Lower PaCO2, as a surrogate for hyperventilation, and larger base deficit (BD), correlating with ongoing acidosis, may be associated with decreased survival and worse neurologic outcomes in post-arrest TH patients. Methods: We conducted a retrospective chart review of post-arrest patients treated with TH from the PATH database, representing 11 institutions. Demographic variables were analyzed using chi-square tests. Unadjusted logistic regression analyses were performed to assess the relationship between hypocapnia (PaCO2 < 30 mmHg), normocapnia (PaCO2 30-45 mmHg), hypercapnia (PaCO2 > 45 mmHg), and mean BD with respect to mortality or neurologic outcome at first post-arrest ABG and at 6 hrs, 12 hrs, and 24 hrs post-arrest. Results: A total of 179 patients were enrolled. Patients were 59.6 ± 16.4 years, 54% were male, and VF/VT was present in 31% of patients. Sixty-six (37%) survived to discharge, of whom 76% (50/66) had good neurologic outcomes (CPC 1 or 2). Survivors had a lower mean initial PCO2 value than non-survivors (49.0 vs. 57.5 mmHg; p=0.03); survival was significantly higher in patients who had normocapnia (51%) on their initial ABGs when compared to those who were hypocapnic (25%) or hypercapnic (28%) (p=0.007); however, there was no significant difference in survival at the other time points. When adjusting for sex and initial rhythm, hypocapnia and hypercapnia had lower odds of survival in comparison to normocapnia (OR 0.23, 95% CI 0. p=0.18 and OR 0.72, p=0.38). Survivors had less severe BD than non-survivors at initial ABG (-8.0 vs. -11.3; p=0.005) and at all subsequent time points. There was no difference in PCO2 or BD values between patients with good vs. poor neurologic outcomes. Objectives: We hypothesized that patients who have prolonged pre-induction times, or shorter times to arrival at target temperature, may suffer worse neurologic outcomes than those who have shortened pre-induction times and more gradual declines to target temperature. Methods: We conducted a retrospective chart review of TH patients from the PATH database from three institutions with identical TH protocols. Demographic variables were collected and analyzed using chi-square tests. Time points calculated were time from arrest to return of spontaneous circulation (ROSC) (downtime), time from ROSC to initiation of TH (pre-induction), and time from initiation of TH to arrival at target temperature (induction). Mean times were analyzed, and logistic regression was used to evaluate for associations between time variables and neurologic outcome. Objectives: The objectives of this study were to evaluate the external validity and inter-rater reliability of this assessment tool in a setting other than where it was developed. Methods: This was an experimental study conducted in the simulation lab of a tertiary-care pediatric center. A total of 24 residents were videotaped during simulated pediatric resuscitation scenarios. Each resident led five simulated scenarios before and after the Pediatric Advanced Life Support (PALS) course. The pre- and post-PALS scenarios were paired, such that each resident acted as team leader in pulseless non-shockable arrest, pulseless shockable arrest, dysrhythmia, respiratory, and shock scenarios. We altered clinical details of the scenarios to diminish recognition by the residents, but each case within a pair corresponded to the same PALS treatment algorithm. Five subspecialists in the fields of pediatric emergency medicine and intensive care from North America and Europe were trained to evaluate the residents' performance using Grant's assessment tool. Each rater was assigned one of the five scenario pairs, and all raters were blinded to the pre- and post-PALS phases of the scenarios. In the absence of a gold standard scoring tool, we used construct validity: it was determined that, for the tool to be valid, participants should improve their scores after participating in the PALS course. Inter-rater reliability was assessed by having two raters independently evaluate the residents' performance. Pre- and post-PALS means were compared using a paired-sample Student's t-test. Inter-rater reliability was measured for the five scenarios using the intraclass correlation coefficient.
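Inter-rater reliability with two raters over repeated scenarios is commonly summarized with ICC(2,1) (two-way random effects, absolute agreement, single rater); the abstracts do not specify which ICC form was used. A self-contained sketch of the Shrout-Fleiss computation with hypothetical scores:

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1) per Shrout & Fleiss: scores has shape (n_subjects, n_raters)."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-subject
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-rater
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Two raters scoring five simulated resuscitations (hypothetical scores)
print(icc2_1([[70, 72], [55, 60], [80, 78], [65, 66], [90, 88]]))
```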
Background: Effective teamwork is essential in critical patient management. Simulation is effective for education and assessment and poses an attractive modality for teamwork training, but there is a paucity of data on its efficacy in this area. Objectives: To evaluate the effect of real-time team training on novice learners' management of simulated critically ill patients. Secondary outcomes were improvement in overall teamwork and quantified differences between real-time and video-based review of simulated cases. Methods: Prospective, randomized experimental study. Students, who are exposed to simulation as individuals and as team members throughout medical school, enrolled in a 1-week intensive course in acute care and were randomized to receive either a 15-minute intervention on effective teamwork with real-time feedback or the standard simulation introduction, which includes the importance of teamwork. Standard-group students individually managed simulated patient scenarios, which has been the standard for over ten years. Intervention-group students managed the same cases in rotating teams of three. On the last day, all students worked in teams of three (Day 1 = shock; Day 3 = abdominal emergencies; Day 5 = dyspnea). Each team was evaluated using previously validated critical action checklists (unique for each case) in real time by instructors, and subsequently by two video reviewers who also used a pre-validated teamwork assessment tool to gauge how well students functioned as a team. In a subset of cases, we compared real-time and video-review completion of critical action checklists. Conclusion: This is the first study of the effect of opiate prescriptions at discharge on ED patient satisfaction scores. Patients given prescriptions did rate the physicians' concern for their comfort significantly higher. Background: Nonmedical prescription drug use, defined as using opiates or sedatives to 'get high', taking someone else's, or taking more than prescribed, is a major problem. Objectives: 1) Identify the prevalence of nonmedical prescription opiate use (NPOU), nonmedical prescription sedative use (NPSU), and the prevalence of dependence and abuse among a population of adolescents and young adults with current drug use, and 2) identify correlates of lifetime NPOU and NPSU. Methods: Patients aged 14-24 presenting to an urban ED for care between 2/2010 and 9/2011 were recruited as part of a larger study. Recruitment occurred systematically; those reporting any drug use in the past 6 months completed a survey using validated measures of drug use, mental health, and violence, and a chart review was performed to ascertain ED visit characteristics. Patients presenting with assault-related injuries were oversampled. Correlates of lifetime NPOU and NPSU were examined using logistic regression. Results: Of 1,448 participants screened, 600 endorsed past 6-month drug use. Among this sample, 17% (n=99) reported lifetime NPOU, and 67% of those (n=63) reported NPOU in the past 6 months. Among those with lifetime NPOU, 28% met criteria for prescription opiate dependence or abuse and 42% were at moderate or high risk for problems related to prescription opiates. Additionally, only 52% of people with lifetime NPOU had regular sources of care, and 69% had utilized the ED in the prior 6 months. In the multiple logistic regression analysis, correlates of lifetime NPOU included identifying as Caucasian and cocaine use. Similarly, 20% (n=118) of the sample endorsed lifetime NPSU, and 59% of those reported NPSU in the past six months. Among those with lifetime NPSU, 28% met criteria for prescription sedative dependence or abuse and 42% were at moderate or high risk for problems related to prescription sedative use. Correlates of lifetime NPSU included Caucasian race, cocaine use, current depression, alcohol misuse, and peer violence. Those with marijuana use were less likely to report NPOU and NPSU. Conclusion: Among adolescents and young adults who have used drugs in the past 6 months, nonmedical use of prescription opiates and sedatives is common, with 28% meeting criteria for dependence and abuse. The ED is a routine source of prescription drugs and is a critical place for nonmedical prescription drug use intervention.
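Correlates of lifetime NPOU/NPSU are estimated with multiple logistic regression; a minimal sketch of that kind of model in statsmodels, fit to randomly generated stand-in data rather than the study sample:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "npou":      rng.integers(0, 2, n),  # lifetime nonmedical opiate use
    "caucasian": rng.integers(0, 2, n),
    "cocaine":   rng.integers(0, 2, n),
    "marijuana": rng.integers(0, 2, n),
})

fit = smf.logit("npou ~ caucasian + cocaine + marijuana", data=df).fit(disp=0)
print(np.exp(fit.params))  # coefficients exponentiated to odds ratios
```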
Objectives: To determine the association between statewide opiate prescribing patterns and prescription drug-related mortality rates. Methods: This ecological study evaluated trends in opiate prescribing from 2001 to 2010 in Florida. Information on all oxycodone, hydrocodone, morphine, fentanyl, and methadone purchased by hospitals, pharmacies, and practitioners in Florida was obtained from the US DEA. Purchases made out of state were excluded. Information on specific opiate-related causes of death was obtained from the state medical examiner's office. When more than one drug was present, the medical examiner's determination of the specific drug cause of death was used for categorization. Deaths in which a drug was present but not deemed to be the cause were excluded. Morphine- and fentanyl-related mortality prior to 2003 was excluded due to unavailable data. Using Pearson correlation, we analyzed the prescription trends of five prescription opiates and their relationship to population-adjusted drug-specific mortality rates. Results: From 2001 to 2010, sales of oxycodone, hydrocodone, methadone, morphine, and fentanyl in the state of Florida increased from (in millions of g): 2.0 to 12.4, 1.1 to 1.9, 0.17 to 1.46, 0.62 to 1.7, and 0.13 to 0.29, respectively. The overall prescription opiate-related annual mortality rate (per 100,000 population) increased from 4.1 to 16.8 over this period. The largest increases occurred for oxycodone- and methadone-related mortality rates, which increased from 1.94 to 8.06 and from 1.09 to 3.69, respectively. Pearson correlation coefficients (95% CI) for the association between annual prescription amounts and drug-specific mortality rates for oxycodone, hydrocodone, methadone, morphine, and fentanyl were 0.99 (0.98 to 1), -0.21 (-0.74 to 0.483), 0.79 (0.31 to 0.94), 0.53 (-0.28 to 0.89), and -0.22 (-0.80 to 0.57), respectively. Conclusion: Increased prescriptions for oxycodone and methadone were positively associated with drug-specific mortality rates. Oxycodone prescriptions were more strongly associated with increased mortality than any other prescription opiate. Objectives: We hypothesize that exposure to organophosphates (OP) introduces measurable changes in the control of respiration during sleep. Methods: Spontaneously breathing Wistar rats (n=8) were habituated to a whole-body plethysmography chamber and allowed to fall asleep before exposure to escalating concentrations of inspired CO2 (0, 2, 4, 6, 8, and 10%). Recordings of the respiratory response were performed using software designed for this purpose (Data Science International, St Paul, MN). Following this, the animals were exposed to a sub-lethal dose of dichlorvos (22 mg/kg SQ, or two-thirds of the LD50). Animals were followed post-poisoning until recovery or death. One to four days post-exposure, animals that survived were placed in the whole-body plethysmography chamber again to record the respiratory response to the same increasing levels of CO2 during periods of sleep. Data were analyzed using Student's t-test or ANOVA.
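The ecological prescribing study above correlates annual statewide sales with drug-specific mortality rates. A minimal sketch of a Pearson correlation with a Fisher z-transform CI, using invented year-by-year series rather than the DEA/medical examiner data:

```python
import numpy as np
from scipy.stats import pearsonr

sales = np.array([2.0, 3.1, 4.5, 6.2, 7.8, 9.0, 10.5, 11.2, 12.0, 12.4])  # millions of g (hypothetical)
deaths = np.array([1.9, 2.4, 3.0, 4.1, 5.0, 5.9, 6.8, 7.4, 7.9, 8.1])     # per 100,000 (hypothetical)

r, p = pearsonr(sales, deaths)
z = np.arctanh(r)                    # Fisher z-transform
se = 1.0 / np.sqrt(len(sales) - 3)
lo, hi = np.tanh([z - 1.96 * se, z + 1.96 * se])
print(f"r={r:.2f} (95% CI {lo:.2f} to {hi:.2f}), p={p:.3g}")
```

With only 8-10 annual observations per drug, such intervals are wide, which is consistent with the wide CIs reported above.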