NAEMSP – National Association of EMS Physicians

NAEMSP AM 17

For e-triage readers, highlights from the most recent congress of the National Association of EMS Physicians (NAEMSP)
Courtesy of Prehospital Emergency Care, the official journal of the NAEMSP

DOES COMPLIANCE WITH THE AHA GUIDELINE RECOMMENDATIONS FOR CPR QUALITY PREDICT SURVIVAL FROM OUT-OF-HOSPITAL CARDIAC ARREST?
Sheldon Cheskes, Rob Schmicker, Laurie Morrison, Tom Rea, Brian Grunau, Ian Drennan, Brian Leroux, Christian Vaillancourt, et al., Sunnybrook Centre for Prehospital Medicine, University of Toronto
Background: Measures of chest compression fraction (CCF), compression rate, compression depth and pre-shock pause have all been independently associated with improved outcomes from out-of-hospital cardiac arrest (OHCA). However, it is unknown whether compliance with the American Heart Association (AHA) guideline recommendations for cardiopulmonary resuscitation (CPR) quality predicts survival from OHCA.
Methods: We performed a secondary analysis of prospectively collected data from the Resuscitation Outcomes Consortium Cardiac Arrest Epistry database. As per the 2015 AHA guidelines, high quality CPR was defined as CCF >0.8, chest compression rate 100-120/minute, chest compression depth 50-60 mm, and pre-shock pause <10 seconds. Multivariable logistic regression models controlling for Utstein variables were used to assess the relationship between compliance with AHA CPR quality benchmarks and survival to hospital discharge and neurologically intact survival with Modified Rankin Score (MRS) ≤3. The reference standard was cases that did not meet all CPR quality benchmarks. Due to potential confounding between CPR quality metrics and cases that achieved early return of spontaneous circulation (ROSC), we performed a subgroup analysis restricted to patients who obtained ROSC after ≥10 minutes of EMS resuscitation.
Results: 35,445 defibrillator records were collected over a 4-year period ending in June 2015 of which 19,558 (55.2%) had complete CPR quality data. For the primary model (CCF, rate, depth), there was no significant difference in survival for resuscitations that met all CPR quality benchmarks compared to the reference standard (OR 1.26; 95% CI: 0.80, 1.97). When the dataset was restricted to patients obtaining ROSC after ≥10 minutes of EMS resuscitation (n=4,158), survival was significantly higher for those resuscitations that met all CPR quality benchmarks (OR 2.17; 95% CI: 1.11, 4.27) compared to the reference standard. For this subset of patients, compliance with all CPR benchmarks was also associated with greater odds of neurologically intact survival with MRS ≤3 (OR 2.95; 95% CI: 1.12, 7.81).
Conclusions: Compliance with the current AHA guidelines for CPR quality was associated with improved survival for resuscitations with ROSC after ≥10 minutes of EMS resuscitation. Our findings suggest CPR quality is an important predictor of survival when controlling for length of resuscitation.
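The benchmark definition used in this abstract lends itself to a simple compliance check. The sketch below encodes the 2015 AHA thresholds exactly as stated above; the function and field names are illustrative, not taken from the ROC Epistry schema.

```python
def meets_aha_benchmarks(ccf, rate_per_min, depth_mm, preshock_pause_s=None):
    """Return True if all available CPR-quality benchmarks are met.

    Benchmarks (2015 AHA guidelines, as defined in the abstract):
      CCF > 0.8, compression rate 100-120/min, depth 50-60 mm,
      pre-shock pause < 10 s (applicable only when a shock was delivered).
    """
    ok = (
        ccf > 0.8
        and 100 <= rate_per_min <= 120
        and 50 <= depth_mm <= 60
    )
    if preshock_pause_s is not None:  # non-shockable rhythms have no pause
        ok = ok and preshock_pause_s < 10
    return ok
```

A record failing any single benchmark falls into the abstract's reference standard ("did not meet all CPR quality benchmarks").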

VENTRICULAR FIBRILLATION QUANTITATIVE ELECTROCARDIOGRAM MEASURES ASSOCIATED WITH RETURN OF ORGANIZED RHYTHM IN OUT-OF-HOSPITAL CARDIAC ARREST
Matthew L. Sundermann, James J. Menegazzi, et al. University of Pittsburgh
Background: Out-of-hospital cardiac arrest (OHCA) is a major cause of mortality, and ventricular fibrillation (VF) is a common electrocardiogram (ECG) presentation of OHCA. Quantitative ECG (QECG) metrics of the VF waveform, including Amplitude Spectrum Area (AMSA), median slope (MS), and centroid frequency (CF), may have utility for guiding defibrillation and CPR. Even so, VF QECG measures have yet to be translated to prehospital care. We sought to use data from a large contemporary resuscitation trial to further understand their utility. We hypothesized that QECG metrics would be associated with return of organized rhythm (ROOR) in OHCA.
Methods: Data from prehospital, EMS-treated cardiac arrests from 2011 to 2015, enrolled in the Continuous Chest Compression trial, were obtained from 7 ROC sites. Data were downloaded from monitors using manufacturer software. Signal data were then extracted from the downloaded files using a custom Matlab program (Mathworks Inc, Natick, MA). Pre-shock ECG segments used for QECG analysis spanned from the end of the last chest compression before a shock to the moment immediately before the shock. Return of organized rhythm (ROOR) was defined as a regularly occurring complex, regardless of rate or QRS width, during the largest compression gap in a 3-minute period post-shock. AMSA, MS, and CF were calculated as the mean of all available consecutive 3-second ECG segments that were free of compression artifact. Logistic regression was performed for each QECG measure using an outcome of ROOR, with separate models for total shocks and first shocks. Statistics were performed with STATA (StataCorp LP, College Station, TX).
Results: 3,941 total shocks and 999 first shocks were found in 1,842 unique OHCA cases. The ROOR rate for all shocks was 25.7% and the ROOR rate per case was 40.28%. QECG odds ratios for ROOR in total shocks were AMSA 1.07 (95% CI 1.05-1.09), p < .001; MS 1.48 (1.33-1.65), p < .001; and CF 7.89 (0.75-8.78), p = .130. QECG odds ratios for ROOR from first shocks were AMSA 1.06 (1.03-1.09), p < .001; MS 1.37 (1.18-1.58), p < .001; and CF 4.52 (0.728-28.01), p = .105.
Conclusions: In this large cohort of EMS-treated OHCA patients with a recorded shock, AMSA and MS were significantly associated with ROOR.
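For readers unfamiliar with the QECG metrics, AMSA can be computed from a compression-free VF segment via the Fourier spectrum. The abstract does not give the exact formula, band limits, or windowing; the sketch below uses a common literature definition (sum of amplitude × frequency over roughly 2–48 Hz) and is an assumption, not the authors' implementation.

```python
import numpy as np

def amsa(ecg, fs, f_lo=2.0, f_hi=48.0):
    """Amplitude Spectrum Area of a compression-free VF segment.

    AMSA is commonly defined as the sum of (amplitude x frequency)
    over the spectrum, restricted to a band that excludes baseline
    drift and high-frequency noise. The 2-48 Hz band used here is a
    typical literature choice, assumed for illustration.
    """
    spectrum = np.abs(np.fft.rfft(ecg)) / len(ecg)   # per-bin amplitude
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)    # bin frequencies (Hz)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(spectrum[band] * freqs[band]))
```

Because each amplitude is weighted by its frequency, coarse low-frequency VF scores lower than fine high-frequency VF of the same amplitude, which is why AMSA is studied as a shock-success predictor.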

COMPRESSION-TO-VENTILATION RATIO AND INCIDENCE OF REARREST: A SECONDARY ANALYSIS OF THE ROC CCC TRIAL
David D. Salcido, Tom P. Aufderheide, Allison C. Koller, Heather Herren, Jack Nuttall, Matthew L. Sundermann, The Resuscitation Outcomes Consortium, University of Pittsburgh
Background: When an out-of-hospital cardiac arrest (OHCA) patient achieves return of spontaneous circulation (ROSC) but subsequently has another cardiac arrest prior to hospital arrival, the probability of survival to hospital discharge is significantly decreased. Very few modifiable risk factors for re-arrest are known. We examined the association between compression-to-ventilation ratio during cardiopulmonary resuscitation (CPR) and re-arrest, and between re-arrest and outcomes. We hypothesized that re-arrest incidence would be similar between cases treated with 30:2 and continuous chest compression (CCC) CPR, and that re-arrest would be inversely related to survival and good neurological outcome.
Methods: This was a secondary analysis of a large randomized controlled trial of CCC versus 30:2 CPR for the treatment of OHCA between 2011 and 2015 at 8 sites of the Resuscitation Outcomes Consortium (ROC). Patients were randomized at the emergency medical services (EMS) agency level via a cluster randomization design to receive either 30:2 or CCC CPR. Case data were derived from electronic prehospital patient care reports, digital defibrillator files, and hospital records. The primary comparison was the proportion of patients with a re-arrest between cases stratified by compression-to-ventilation as-treated group. We also assessed the association between re-arrest and both survival to hospital discharge and favorable neurological outcome (Modified Rankin Score [MRS] ≤ 3) using multivariable logistic regression adjusting for age, sex, initial rhythm and measures of CPR quality.
Results: There were 14,109 analyzable cases that definitively received either CCC or 30:2 CPR. Of these, 4,713 had prehospital ROSC and 2,040 (43.2%) had at least one re-arrest. Incidence of re-arrest was not significantly different between the CCC and 30:2 groups (44.1% vs. 42.8%, p = 0.12). After controlling for patient and treatment characteristics, re-arrest was significantly associated with lower survival (OR: 0.46, 95%CI: 0.36-0.51) and worse neurological outcome (OR: 0.46, 95%CI: 0.38-0.55).
Conclusions: Re-arrest occurrence was not significantly different between patients receiving CCC and 30:2 CPR, and re-arrest was inversely associated with survival to hospital discharge and favorable neurological outcome (MRS ≤ 3).

WIDESPREAD IMPLEMENTATION OF A PREHOSPITAL SELECTIVE SPINAL MOTION RESTRICTION PROTOCOL IS NOT ASSOCIATED WITH INCREASED SPINAL CORD INJURY
Franco Castro-Marin, Joshua B. Gaither, Robyn N. Blust, Vatsal Chikani, Anne Vossbrink, Rogelio Martinez, Bentley J. Bobrow, HonorHealth Emergency Department
Background: The traditional approach of comprehensive spinal immobilization (SI) has evolved into a more selective process in order to reduce morbidity associated with long spine boards and cervical collars. While relatively small studies have shown selective SI, or spinal motion restriction (SMR), to be safe, large outcome studies are limited. We sought to determine the prevalence of spinal cord injury (SCI) before and after the implementation of selective SMR protocols by multiple EMS agencies across Arizona.
Methods: EMS encounters entered into the State EMS database (660,084) were matched to hospital discharge data between January 1, 2013 and June 30, 2015 with a linkage rate of 86% (567,719). Pre- and post-SMR protocol implementation cohorts were identified based on agency protocol implementation date, excluding a 3-month run-in period. EMS encounters with unknown implementation dates and duplicate encounters were excluded. The primary outcome was to compare the prevalence of SCI (ICD-9 codes 806.x or 952.x) between the pre- and post-SMR cohorts. The prevalence of SCI was determined using the entire population as well as three sub-groups: trauma (T) (ICD-9 code 800-959), spinal trauma possible (ST-P) (CDC Barell Injury matrix: other head, face, neck, spine and back) and spinal trauma verified (ST-V) (ICD-9 952.x, 839.x, 806.x, or 805.x). Analyses were performed using Chi-squared tests.
Results: Sixty-three EMS agencies with a known SMR implementation status were included in the analysis. Of these, 52 transitioned to an SMR protocol. Of the 417,979 EMS encounters included in the full study population, three sub-groups were identified: 99,065 T cases, 47,686 ST-P cases, and 4,505 ST-V cases. There were a total of 226 SCI cases. The prevalence of SCI in the pre- vs. post-SMR implementation cohorts was 0.05% vs. 0.06% (full population), 0.22% vs. 0.24% (T), 0.45% vs. 0.50% (ST-P), and 4.86% vs. 5.14% (ST-V). There was no statistically significant difference in the proportion of patients with SCI before versus after SMR protocol implementation (p > 0.400 for all populations).
Conclusions: No significant increase in the prevalence of SCI was observed across a very large population and multiple sub-groups following the widespread implementation of selective SMR protocols.
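The pre/post prevalence comparisons above rely on Pearson chi-square tests; for a 2×2 table the statistic has a simple closed form. The sketch below uses illustrative counts, not the study's actual cell counts.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table [[a, b], [c, d]], e.g. SCI vs. no-SCI counts in the
    pre- and post-SMR cohorts. Cell counts here are hypothetical.
    """
    n = a + b + c + d
    # Closed form: n * (ad - bc)^2 / (row and column marginal products)
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

With prevalences as close as 0.05% vs. 0.06%, the resulting statistic is tiny and the p-value large, consistent with the reported p > 0.400.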

THE IMPACT OF BURNOUT ON THE EMS WORKFORCE
Remle P. Crowe, Julie K. Bower, Rebecca E. Cash, Ashish R. Panchal, Severo A. Rodriguez, Susan E. Olivo-Marston, The National Registry of EMTs
Background: Burnout is a major workforce concern for Emergency Medical Services (EMS). However, no national estimates exist. Our objectives were to 1) estimate the prevalence of burnout among Emergency Medical Technicians (EMTs) and paramedics, 2) identify characteristics predictive of burnout and 3) assess the relationship between burnout and factors that negatively impact the workforce. We hypothesized that burnout would be associated with more reported sick days and greater reported likelihood of leaving EMS.
Methods: A random sample of 21,160 nationally-certified EMTs and paramedics was selected to receive an electronic questionnaire. The questionnaire utilized the Copenhagen Burnout Inventory (CBI), a validated instrument that measures burnout in three dimensions: personal, work-related and patient-related. Survey weights for non-response by certification level, gender and race/ethnicity were applied. Multivariable logistic regression models were used to estimate adjusted odds ratios (ORs) and 95% confidence intervals (95%CI) to quantify the association of employment characteristics with burnout in each dimension. We also assessed the association of burnout with reporting more than 10 sick days over the past 12 months and reported likelihood of leaving EMS.
Results: We received 2,650 responses (response rate = 13%). More paramedics exhibited burnout in each dimension compared to EMTs: personal (38.3% vs. 24.9%, p<0.05), work-related (30.1% vs. 19.1%, p<0.05), and patient-related (14.4% vs. 5.5%, p<0.05). The final model for personal burnout was adjusted for provider level, experience, sex, agency type and weekly call volume. Predictors of work-related burnout included provider level, experience, agency type and weekly call volume. Variables associated with patient-related burnout included provider level, sex, weekly call volume and education. After controlling for variables associated with each dimension, increased odds of reporting 10 or more sick days were observed for those with personal (OR:2.32, 95%CI:1.39-3.87), work-related (OR:2.30, 95%CI:1.39-3.83), or patient-related burnout (OR:2.35, 95%CI:1.25-4.42). Odds of reporting being likely to leave the EMS profession were elevated for those with personal (OR:2.70, 95%CI:1.94-3.74), work-related (OR:3.43, 95%CI:2.47-4.75), or patient-related burnout (OR:3.69, 95%CI:2.42-5.63).
Conclusions: Burnout was associated with greater reported sickness absence and likelihood of leaving the EMS profession. Future initiatives to reduce burnout among EMS professionals may positively impact the workforce.

EVALUATION OF PREHOSPITAL HYPOXIA “DEPTH-DURATION DOSE” AND MORTALITY IN MAJOR TRAUMATIC BRAIN INJURY
Daniel W. Spaite, et al, Arizona Emergency Medicine Research Center, University of Arizona
Background: Prehospital hypoxia dramatically increases mortality in Traumatic Brain Injury (TBI). However, nearly the entire literature is based upon the simple dichotomy of whether patients had a hypoxic event [O2 saturation (SpO2) <90%] or not. Thus, essentially nothing is known about the influence of the depth or duration of prehospital hypoxia on outcome. Using the statewide, comprehensive, linked EMS data in the Excellence in Prehospital Injury Care (EPIC) TBI Study (NIH 1R01NS071049), which contains all recorded SpO2 values and associated times, we evaluated the association between the prehospital hypoxia “depth-duration dose” and mortality in major TBI.
Methods: We evaluated the moderate/severe TBI cases (CDC Barell Matrix-Type 1) in the EPIC pre-implementation cohort (before TBI guideline implementation; 16,711 cases, 1/07-6/15). Logistic regression was used to determine the association between the probability of death and the depth-duration dose (“dose”) of hypoxia, adjusted for potential confounders and other risk factors. Hypoxic dose was defined as the area circumscribed by the patient’s SpO2 curve over time and the 90% threshold for the entire duration that the patient was hypoxic (units: percent-minutes).
Results: After exclusions [age <10 (6.8%), transfers (29.3%), and fewer than two valid SpO2 measurements with time stamps (19.4%)], 7,432 cases remained (median age 41, 70.1% male). The logistic model revealed a monotonically increasing relationship between hypoxic dose and adjusted probability of death [adjusted OR = 1.16 (95% CI 1.11-1.22) per log2 dose]. Thus, with other factors being equal, in patients with hypoxia, a doubling of the hypoxic dose yields a 16% increase in the adjusted odds of death. Case example: a patient whose SpO2 drops to 80% for 10 min (dose = 100 percent-min) has 16% higher odds of dying than one with a hypoxic dose of only 50 percent-min (e.g., 85% for 10 min or 80% for 5 min).
Conclusions: Historically, oxygenation has been assessed dichotomously in TBI (either the patient was hypoxic or not). These results demonstrate that the depth-duration “dose” of prehospital hypoxia is strongly associated with mortality and may differentiate risk among hypoxic patients. Thus, hypoxia may exert a spectrum of effects, and its influence on outcome may be much more complex than is inferred by the current literature.
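The dose definition in the Methods (area between the SpO2 curve and the 90% line while the patient is below it, in percent-minutes) can be made concrete in a few lines. Trapezoidal integration over (time, SpO2) samples is an assumption for illustration; EPIC's exact computation is not specified in the abstract.

```python
def hypoxic_dose(times_min, spo2):
    """Depth-duration hypoxic dose in percent-minutes.

    Integrates max(0, 90 - SpO2) over time (trapezoidal rule), so only
    the portion of the curve below the 90% threshold contributes.
    """
    deficit = [max(0.0, 90.0 - s) for s in spo2]
    dose = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        dose += 0.5 * (deficit[i] + deficit[i - 1]) * dt
    return dose
```

This reproduces the abstract's worked example: SpO2 of 80% for 10 minutes gives a dose of (90 − 80) × 10 = 100 percent-minutes, while 85% for 10 minutes or 80% for 5 minutes gives 50.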

CAN EMS PROVIDERS PROVIDE APPROPRIATE TIDAL VOLUMES IN A SIMULATED ADULT-SIZED PATIENT WITH A PEDIATRIC-SIZED BAG-VALVE-MASK?
Melissa Kroll, et al., Washington University School of Medicine
Background: In the prehospital setting, Emergency Medical Services (EMS) professionals rely on positive pressure ventilation with a bag-valve-mask (BVM). Multiple emergency medicine and critical care studies have shown that lung-protective ventilation protocols reduce morbidity and mortality. Our primary objective was to determine whether EMS professionals could deliver sufficient tidal volumes to an adult patient using a smaller, pediatric-sized BVM. Secondary objectives were to 1) determine whether the pediatric bag provided volumes consistent with in-hospital lung-protective ventilation and 2) compare the volumes delivered by airway type (mask, King tube, and endotracheal intubation).
Methods: Using a patient simulator of a head and thorax that recorded respiratory rate, tidal volume, peak pressure, and minute volume via a laptop computer, participants were asked to ventilate the simulator during six 1-minute ventilation tests. The first scenario was BVM ventilation with an oropharyngeal airway in place using both an adult- and a pediatric-sized BVM, the second scenario used a supraglottic airway with both bags, and the third scenario used an endotracheal tube with both bags. Participants were enrolled as a convenience sample while on duty, with research staff traveling to their stations. Participants were not given any additional training on ventilation skills prior to enrollment.
Results: We enrolled 50 providers from a large, busy, urban fire-based EMS agency with a mean of 14.96 (SD = 9.92) years of experience. Only 1.5% of all breaths delivered with the pediatric BVM during the ventilation scenarios were below the recommended tidal volume. A greater percentage of breaths were delivered in the recommended range when the pediatric BVM was used (17.5% vs. 5.1%, p < 0.001). Median volumes for each scenario were 570.5 mL, 664.0 mL, and 663.0 mL for the pediatric BVM and 796.0 mL, 994.5 mL, and 981.5 mL for the adult BVM. In all three categories of airway devices, the pediatric BVM provided lower median tidal volumes (p < 0.001).
Conclusions: The study suggests that ventilating an adult patient is possible with a smaller, pediatric-sized BVM. The tidal volumes recorded with the pediatric BVM were more consistent with lung-protective ventilation volumes.

DIFFERENTIAL CORRELATION OF ETCO2 AND CPR QUALITY BETWEEN OUT-OF-HOSPITAL ARRESTS OF CARDIAC AND RESPIRATORY ETIOLOGY
Chengcheng Hu, Dan W. Spaite, Margaret Mullins, Tyler Vadeboncoeur, Bentley Bobrow, Department of Epidemiology and Biostatistics, University of Arizona
Background: While modest correlation between end-tidal CO2 (ETCO2) and CPR quality has been reported among patients who have arrested from presumed cardiac etiology, it is unknown whether this correlation exists in arrests of respiratory etiology. We compared the correlation between ETCO2 and CPR quality among these two groups.
Methods: ETCO2 was monitored with side-stream CO2 (Philips/Respironics or Oridion) and CPR quality with an accelerometer-based system (E/X Series, ZOLL Medical) during treatment of consecutive adult (age 18+) OHCA patients with presumed cardiac or respiratory etiology by two EMS agencies in Arizona (October 2008–June 2015). Minute-by-minute ETCO2 and CPR quality data were extracted. Linear mixed effect models were fitted to use (log-transformed) ETCO2 level to predict four CPR variables: chest compression (CC) depth, (log) CC rate, CC release velocity (CCRV), and (log) ventilation rate (VR). An interaction term was used to test for differential correlation between the 2 groups. A random intercept for each case was included and a spatial power covariance structure assumed for measurements over time.
Results: A total of 399 subjects (median age: 68 yrs, 63% male, 374 cardiac etiology, 25 respiratory) with 2,812 minutes of data were studied. ETCO2 was correlated with CC rate for respiratory etiology (p = 0.011) but not for cardiac etiology and the difference was marginally significant (p = 0.085). ETCO2 was correlated with VR for cardiac etiology (p<.0001) but not for respiratory etiology (p = 0.009 for the difference between etiologies). Doubling ETCO2 was associated with an increase of 8.7mm/s (95% CI: 3.9, 13.5) in CCRV for cardiac etiology and 12.1mm/s (95% CI: -1.8, 26) for respiratory etiology, but the difference between etiologies was not significant. Correlation between ETCO2 and CC depth was similar between the 2 groups. In both cohorts, ETCO2 explained <10% of the variance in each CPR variable.
Conclusions: Correlations between ETCO2 and certain CPR variables differed between patients with cardiac and respiratory etiologies. ETCO2 may not be an adequate substitute for CPR quality monitoring in either situation. Future studies are needed to determine how ETCO2 and CPR quality monitoring can be used in combination to optimize CPR.

REMOTE ISCHEMIC CONDITIONING TO REDUCE REPERFUSION INJURY DURING ACUTE STEMI: A SYSTEMATIC REVIEW AND META-ANALYSIS
Sheldon Cheskes, Ala Iansavitchene, Shelley L. McLeod, University of Toronto
Background: Remote ischemic conditioning (RIC) is a non-invasive therapeutic strategy that uses brief cycles of inflation and deflation of a blood pressure cuff to reduce ischemia-reperfusion injury during acute ST-elevation myocardial infarction (STEMI). The primary objective of this systematic review was to determine if RIC initiated prior to catheterization increases myocardial salvage index, defined as the proportion of area at risk of the left ventricle salvaged by treatment following emergent percutaneous coronary intervention (PCI) for STEMI. Secondary outcomes included infarct size and major adverse cardiovascular events.
Methods: Electronic searches of Medline, EMBASE and Cochrane Central Register of Controlled Trials were conducted, and reference lists were hand-searched. Randomized controlled trials comparing PCI with and without RIC for patients with STEMI published in English were included. Two reviewers independently screened abstracts, assessed quality of the studies, and extracted data. Data were pooled using random-effects models and reported as mean differences (MD) and risk ratios (RR) with 95% confidence intervals (CIs). The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) criteria were used to evaluate the quality of evidence of outcomes.
Results: Nine RCTs were included with a combined total of 999 patients (RIC+PCI = 534, PCI = 465). The myocardial salvage index was higher in the RIC+PCI group at 3 and 30 days, mean difference 0.09 (95% CI: 0.04, 0.15) and 0.12 (95% CI: 0.03, 0.21), respectively. Infarct size was reduced in the RIC+PCI group at 3 and 30 days, mean difference -3.82 (95% CI: -8.15, 0.51) and -4.00 (95% CI: -7.07, -0.93), respectively. There was no statistical difference with respect to death and re-infarction; however, there was a reduction in heart failure with RIC+PCI at 6 months, RR: 0.43 (95% CI: 0.19, 0.99). All outcomes were judged to be of moderate quality of evidence using GRADE criteria except for heart failure, which was determined to be low quality.
Conclusions: RIC is emerging as a promising adjunctive treatment to PCI for the prevention of reperfusion injury in STEMI patients. Ongoing, multicenter clinical trials will help elucidate the effect of RIC on clinical outcomes such as hospitalization, heart failure and mortality.
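The random-effects pooling of mean differences described in this review's Methods is most commonly done with the DerSimonian-Laird estimator. The sketch below implements that standard estimator as an illustration; the review's actual software and settings are not stated in the abstract, and the input data here are placeholders, not the included trials.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effects (e.g., mean differences) under a
    DerSimonian-Laird random-effects model.

    Returns (pooled estimate, (95% CI lower, 95% CI upper)).
    """
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0       # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When between-study heterogeneity (tau²) is zero the estimator reduces to inverse-variance fixed-effect pooling; otherwise the weights are flattened toward equality, widening the CI.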

CPR QUALITY DURING OUT-OF-HOSPITAL CARDIAC ARREST TRANSPORT
Sheldon Cheskes, Cathy Zhan, Adam Byers, Dennis Ko, Richard Verbeek, Ian Drennan, Ahmed Taher, Steve Lin, Steve Brooks, Jason Buick, Laurie J. Morrison, Sunnybrook Centre for Prehospital Medicine, Rescu, Li Ka Shing Knowledge Institute
Background: Previous studies have demonstrated significant associations between cardiopulmonary resuscitation (CPR) quality metrics and survival to hospital discharge following out-of-hospital cardiac arrest (OHCA). No large study has explored the relationship between location of resuscitation (scene vs. transport) and CPR quality. We sought to determine the impact of CPR location on CPR quality metrics during OHCA.
Methods: We performed a retrospective cohort study of prospectively collected data from the Toronto RescuNET Epistry cardiac arrest database. We analyzed CPR quality data from all treated adult OHCA occurring over a 39-month period beginning January 1, 2013. We included OHCA patients who underwent resuscitation by emergency medical services (EMS) and had CPR quality metric data for both the scene and transport phases of the resuscitation. High quality CPR (based on 2010 AHA guidelines) was defined as chest compression fraction (CCF) >0.70, compression rate >100/min, and compression depth >5.0 cm. Scene and transport CPR quality metrics were compared for each patient using a two-sided Wilcoxon signed-rank test for paired samples. The proportion of patients who received high quality CPR (defined as meeting all 3 CPR quality benchmarks) was compared between resuscitation locations using a chi-square statistic.
Results: Among 842 included patients (69.5% male, mean (SD) age 66.8 ± 17.0), the median compression rate was statistically higher on scene compared to transport (105.8 vs. 102.0, ∆ 3.8, 95% CI: 2.5, 4.0, p<0.001), while median compression depth (5.56 vs. 5.33 cm, ∆ 0.23, 95% CI: 0.12, 0.26, p<0.001) and median CCF (0.95 vs. 0.87, ∆ 0.08, 95% CI: 0.07, 0.08, p<0.001) were statistically higher during the transport phase. The proportion of patients meeting the definition of high quality CPR was similar on scene and during transport (45.8% vs. 42.5%, ∆ 3.3, 95% CI: -1.4, 8.1, p=0.17).
Conclusions: High quality CPR metrics were identified in both resuscitation locations (scene and transport) and exceeded current CPR quality benchmarks. These results suggest that high quality manual compressions can be performed by well-trained EMS systems regardless of location. Further study is required to determine whether these metrics can be replicated in other EMS jurisdictions.

HEAD UP CARDIOPULMONARY RESUSCITATION LOWERS INTRACRANIAL PRESSURE AND IMPROVES CEREBRAL PERFUSION PRESSURE DURING PROLONGED CPR IN A PORCINE MODEL OF VENTRICULAR FIBRILLATION
Johanna C. Moore, Nicolas Segal, Michael Lick, Guillaume Debaty, Kenneth Dodd, Bayert Salverda, Keith G. Lurie, Department of Emergency Medicine, Hennepin County Medical Center
Background: The head up position (HUP) during CPR has recently been found to improve cerebral perfusion pressure (CerPP) and cerebral blood flow (CBF). It is unknown if HUP over a prolonged period of CPR will result in decreased cerebral flow due to pooling of blood in the abdomen and lower extremities. We therefore assessed CBF and CerPP during prolonged CPR.
Methods: Female pigs (38-42 kg) were sedated, intubated, and anesthetized. Vascular and intracranial access were obtained for monitoring and injection of microspheres for measurement of blood flow. Ventricular fibrillation was induced and after 8 minutes, automated Active Compression Decompression (ACD) CPR with an Impedance Threshold Device (ITD) was performed (compression: ventilation ratio of 30:2) for 2 minutes. Pigs were then prospectively randomized to the HUP or supine position (SUP) and CPR continued for another 18 minutes. Microspheres were injected at baseline, 5, and 15 minutes. The primary endpoint of this ongoing study is CBF at 15 minutes. Secondary endpoints include CerPP at 19 minutes and other hemodynamic parameters at 19 minutes. Endpoints were analyzed using an unpaired t-test and expressed as mean ± SD.
Results: Baseline data were similar between groups. To date, cerebral and cardiac blood flow (ml/min/g) after 15 min of CPR were statistically similar but trended higher in the HUP group: 0.80 ± 0.83 and 0.80 ± 1.10, respectively, for the HUP group (n = 7) versus 0.39 ± 0.49 and 0.66 ± 0.75 for the SUP group (n = 7) (p = n.s.). After 19 minutes of CPR, CerPP (mmHg) in the HUP group (n = 10) was higher than in the SUP group (n = 8) (25 ± 13 vs. 9 ± 17, p = 0.038). Coronary perfusion pressures were similar (HUP 11 ± 12 vs. SUP 5 ± 15, p = n.s.). Mean ICP (mmHg) was lower in the HUP group (-2.6 ± 3 vs. 12 ± 3, p<0.0001).
Conclusions: In this ongoing study, CerPP was higher and ICP values lower with HUP CPR vs. SUP CPR over a prolonged CPR period. Further animals are needed for definitive determination of cerebral and cardiac blood flow.

ACTIVE INTRATHORACIC PRESSURE REGULATION DURING POST-CARDIAC ARREST CARE SIGNIFICANTLY REDUCES VASOPRESSOR REQUIREMENTS, IMPROVES CEREBRAL BLOOD FLOW AND NEUROLOGICALLY INTACT SURVIVAL
Anja Metzger, Michael Lick, Paul Berger, Nicolas Segal, Aaron Robinson, Johanna Moore, Keith Lurie, University of Minnesota
Background: Post-cardiac arrest care is a critical aspect of survival after cardiac arrest. Active intrathoracic pressure regulation (a-IPR) therapy consists of the delivery of a positive pressure ventilation followed by a sub-atmospheric expiratory phase pressure of -10 cmH2O. A-IPR has previously been shown to improve cerebral hemodynamics in subjects with brain injury and low perfusion states. This study tested the hypothesis that a-IPR applied during a 6-hour post-ROSC period would enhance cerebral hemodynamics and require less vasopressor support.
Methods: After 10 minutes of untreated ventricular fibrillation and 6 minutes of active compression decompression (ACD) CPR plus an impedance threshold device (ITD), 12 female pigs (38.9 ± 0.9 kg) were randomized into 2 post-ROSC treatment groups, one with continuous a-IPR therapy, the other without a-IPR therapy. A target mean arterial pressure (MAP) of 75 mmHg was achieved through controlled infusion of an epinephrine solution (0.002 mg/ml). MAP, vasopressor requirements and cerebral blood flow (CBF) were recorded continuously for 6 hours. Student’s t-tests were used for statistical comparisons. Data are expressed as mean ± SD.
Results: MAP throughout the study was matched between groups (control 78.2 ± 1.9 vs. a-IPR 74.5 ± 1.9 mmHg) through careful control of vasopressors. Total epinephrine during the post-ROSC period was significantly reduced with a-IPR (0.08 ± 0.09 vs. 0.29 ± 0.12 mg, p<0.01). Vasopressor support was most needed during the second hour of post-ROSC care, with the control group requiring four times more epinephrine than the a-IPR group (0.05 ± 0.07 vs. 0.19 ± 0.07 mg, p<0.01). Mean CBF was significantly higher in the a-IPR group for the first 5 hours of post-ROSC care (30.4 ± 6.6 vs. 19.1 ± 9.8 ml/100gm/min, p<0.05). All of the a-IPR treated animals had a cerebral performance category (CPC) score of 1 at 24 hours, whereas the non-a-IPR treated animals had CPC scores of 1 (n=3), 2 (n=2), and 5 (n=1).
Conclusions: Addition of a-IPR therapy post-ROSC required less vasopressor support and improved cerebral hemodynamics. A-IPR has the potential to treat cardiovascular instability and brain injury during the post-ROSC period, two major detriments to survival after cardiac arrest.

THE PROLIFERATION OF MOBILE TELEPHONES HAS REDUCED SOME MOTOR VEHICLE CRASH NOTIFICATION TIMES, BUT NOT CRASH FATALITY RATES
Jimmy L. Stickles, Lawrence H. Brown, UT Austin Dell Medical School Emergency Medicine Residency
Background: The purpose of this study was to evaluate whether increased proliferation of mobile telephones has been associated with decreased MVC notification times and/or decreased MVC fatality rates in the United States (U.S.).
Methods: This retrospective ecological study used World Bank annual mobile phone market penetration data and U.S. Fatality Analysis Reporting System (FARS) fatal MVC data for 1994-2014. For each year, phone proliferation was measured as mobile phones per 100 population. Crash-specific FARS data were used to calculate MVC notification time (time EMS notified – time MVC occurred) in minutes. Summary FARS data were used to determine the MVC fatality rate per billion vehicle miles travelled (BVMT). We used basic vector auto-regression modeling to explore relationships between changes in phone proliferation and subsequent (following year, two years later, and three years later) changes in median and 90th percentile MVC notification times, as well as MVC fatality rates. Underlying unrelated time trends were accounted for by analyzing second differences in the data.
Results: From 1994 to 2014, mobile phone proliferation in the U.S. increased from 6.1 to 98.4 phones per 100 people. Median MVC notification times decreased from 3 minutes to 1 minute, and 90th percentile MVC notification times decreased from 15 to 10 minutes. MVC fatality rates decreased from 17.3 to 10.8 fatalities per BVMT. Larger year-over-year increases in phone proliferation were associated with larger decreases in 90th percentile notification times for MVCs occurring during daylight hours (p=0.004) and on the national highway system (p=0.046) two years subsequent, and crashes off the national highway system three years subsequent (p=0.023). There were no significant associations between changes in phone proliferation and subsequent changes in median notification times. There were also no significant associations between changes in phone proliferation and subsequent changes in MVC fatality rates.
Conclusions: Increased mobile phone proliferation has been associated with shorter 90th percentile EMS notification times for a subset of U.S. MVCs, but not with reduced MVC mortality rates.

ACCURACY OF PREHOSPITAL DOCUMENTATION OF HYPOXIA COMPARED TO CONTINUOUS NON-INVASIVE MONITOR DATA TRACKING IN MAJOR TRAUMATIC BRAIN INJURY
Bruce J. Barnhart, Daniel W. Spaite, Eric Helfenbein, Octavio Perez, Saeed Babaeizadeh, Chengcheng Hu, Vatsal Chikani, Joshua B. Gaither, Duane Sherrill, Kurt R. Denninghoff, Amber D. Rice, Bentley J. Bobrow, Arizona Emergency Medicine Research Center, University of Arizona
Background: It is well established that prehospital hypoxia dramatically increases mortality in Traumatic Brain Injury (TBI). Thus, in EMS TBI research, case ascertainment and risk-adjustment are highly dependent upon documentation of in-field O2 saturation. Our objective was to compare the rate of hypoxia identified by EMS personnel and documented in EMS patient care records (PCR) versus the actual rate of hypoxia recorded by continuous, non-invasive monitoring in TBI.
Methods: A subset of major TBI cases (moderate/severe) in the Excellence In Prehospital Injury Care (EPIC) TBI Study (NIH 1R01NS071049) were evaluated (March 30, 2013-June 26, 2015). Cases from 4 EMS agencies that report continuous monitor data (Philips MRxTM) as part of EPIC were included. All monitor data available for post-hoc review were displayed and accessible to the providers during EMS care. We compared PCR documentation of hypoxia (O2 sat <90%) to actual recorded monitor data on each patient (Fisher’s Exact Test, α=0.05).
Results: 77 cases were included (median age: 52, 65% male). The monitors displayed and recorded 16 hypoxic cases (20.8%), but only 6 (37.5%) were documented. Thus, while the rate of actual hypoxia was 20.8%, the case ascertainment was only 7.8% (6/77) when PCR documentation alone was used (p=0.036).
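The ascertainment percentages follow directly from the reported counts (a simple arithmetic check, not the study's analysis code):

```python
# Reproduce the ascertainment percentages from the reported counts.
total_cases = 77          # major TBI cases with monitor data
monitor_hypoxia = 16      # hypoxia (O2 sat < 90%) captured by the monitor
pcr_documented = 6        # of those, documented in the EMS patient care record

actual_rate = monitor_hypoxia / total_cases          # true hypoxia rate
documented_share = pcr_documented / monitor_hypoxia  # fraction of true cases documented
pcr_ascertainment = pcr_documented / total_cases     # rate seen via PCRs alone

print(f"{actual_rate:.1%}  {documented_share:.1%}  {pcr_ascertainment:.1%}")
# 20.8%  37.5%  7.8%
```

The gap between 20.8% and 7.8% is the hidden under-ascertainment the abstract warns about when PCR documentation alone drives case identification.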
Conclusions: Among patients with major TBI, monitor-identified hypoxia occurred much more frequently (20.8%) than was documented by EMS personnel (7.8%). Only 37.5% of cases with actual hypoxia were recorded in the PCRs. This may be explained, in part, by the fact that pulse oximetry occurs continuously. Thus, ongoing care responsibilities and scene distractions may cause providers to miss low readings as they fluctuate moment-by-moment. This has significant clinical implications as a potential hidden contributor to poor outcomes if hypoxia goes unrecognized (and untreated) rather than simply not being documented. Furthermore, these findings have important implications for case ascertainment, confounding, and risk-adjustment in EMS TBI studies. Whenever possible, quality improvement and research projects should utilize continuous non-invasive monitor data to identify and evaluate hypoxic patients in the setting of TBI. These findings may also have implications for identifying hypoxia in EMS patients with other critical conditions.

PROSPECTIVE PREHOSPITAL EVALUATION OF THE CINCINNATI PREHOSPITAL STROKE SEVERITY SCALE
Jason McMullan, Brian Katz, Joseph Broderick, Pamela Schmit, Heidi Sucharew, Opeolu Adeoye, University of Cincinnati
Background: A simple, easily adoptable scale with good performance characteristics is needed for EMS providers to appropriately triage suspected stroke patients to comprehensive stroke centers (CSC). Many existing tools are complex, require substantial training, or have not been prospectively validated in the prehospital setting. We describe the feasibility and effectiveness of prehospital implementation of our previously retrospectively derived and validated Cincinnati Prehospital Stroke Severity Scale (CPSSS) to identify subjects with severe stroke (NIHSS ≥15) among all prehospital patients with clinical suspicion of stroke/TIA. Secondarily, we evaluated the tool’s ability to identify subjects with NIHSS ≥10, large vessel occlusion (LVO), or needing services available only at a CSC.
Methods: Without formalized training, Cincinnati Fire Department providers performed standard stroke screening (“face, arm, speech, time” FAST) and CPSSS as part of their assessment of suspected stroke/TIA patients. Outcomes for patients brought to the region’s only CSC or assessed by the regional stroke team were determined through structured chart review by a stroke team nurse. CPSSS test characteristics for each outcome were calculated with 95% confidence intervals.
Results: Complete prehospital and outcome data were available for 58 FAST-positive subjects among 158 subjects with prehospital suspicion for stroke/TIA. Subjects were excluded if FAST was negative (n=22), FAST or CPSSS was incompletely documented (n=24), if the patient was taken to a non-CSC and did not receive a stroke team consult (n=48), or if outcome data were missing (n=6). CPSSS sensitivity and specificity for each outcome were: NIHSS ≥15, 77% (95% CI 46-95) and 84% (95% CI 69-93); NIHSS ≥10, 64% (95% CI 41-83) and 91% (95% CI 76-98); LVO, 71% (95% CI 29-96) and 70% (95% CI 55-83); overall CSC need, 57% (95% CI 34-78) and 79% (95% CI 61-91).
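Test characteristics of this kind come from a 2x2 table of scale result versus outcome. A minimal sketch with hypothetical counts (the abstract does not report the raw tables, and its exact CI method is not stated; Wilson score intervals are used here as one common choice):

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - spread) / denom, (centre + spread) / denom

# Hypothetical counts for one outcome (NOT the study's raw data):
# 13 subjects with the outcome, 10 of them CPSSS-positive;
# 45 without the outcome, 38 of them CPSSS-negative.
sens, spec = sens_spec(tp=10, fn=3, tn=38, fp=7)
lo, hi = wilson_ci(10, 13)
print(f"sensitivity {sens:.0%} (95% CI {lo:.0%}-{hi:.0%}), specificity {spec:.0%}")
```

With denominators this small, the wide intervals reported in the abstract (e.g., 29-96% for LVO sensitivity) are exactly what such a calculation produces.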
Conclusion: In this pilot prospective evaluation performed in the prehospital setting by EMS providers without formalized training, CPSSS is comparable to other published tools in test characteristics and may inform appropriate CSC triage beyond LVO ascertainment alone.

IMPLEMENTATION OF THE CANADIAN C-SPINE RULE BY PARAMEDICS: A SAFETY EVALUATION
Christian Vaillancourt, Manya Charette, Julie Sinclair, Justin Maloney, George A. Wells, Ian G. Stiell, Department of Emergency Medicine, University of Ottawa
Background: The Canadian C-Spine Rule (CCR) was validated by emergency physicians and triage nurses to determine the need for radiography in alert and stable Emergency Department trauma patients. It was modified and validated for use by paramedics in 1,949 patients. The prehospital CCR calls for evaluation of active neck rotation if patients have none of 3 high-risk criteria and at least 1 of 4 low-risk criteria. This study evaluated the impact and safety of the implementation of the CCR by paramedics.
Methods: This single-centre prospective cohort implementation study took place in Ottawa, Canada. Advanced and primary care paramedics received on-line and in-person training on the CCR, allowing them to use the CCR to evaluate eligible patients and selectively transport them without immobilization. We evaluated all consecutive eligible adult patients (GCS 15, stable vital signs) at risk for neck injury. Paramedics were required to complete a standardized study data form for each eligible patient evaluated. Study staff reviewed paramedic documentation and corresponding hospital records and diagnostic imaging reports. We followed all patients without initial radiologic evaluation for 30 days for referral to our spine service, or subsequent visit with radiologic evaluation. Analyses included sensitivity, specificity, kappa coefficient, t-test, and descriptive statistics with 95% CIs.
Results: The 4,034 patients enrolled between January 2011 and August 2015 were: mean age 43 (range 16-99), female 53.3%, motor vehicle collision 51.9%, fall 23.8%, admitted to hospital 7.0%, acute c-spine injury 0.8%, and clinically important c-spine injury 0.3%. For the 11 clinically important injuries, the CCR had sensitivity 91% (95% CI 58-100%) and specificity 67% (95% CI 65-68%). Kappa agreement for interpretation of the CCR between paramedics and study investigators was 0.94 (95% CI 0.92-0.95). Paramedics were comfortable or very comfortable using the CCR in 89.8% of cases. Mean scene time was 3 minutes (15.6%) shorter for those not immobilized (17 min vs. 20 min, p=0.0001). A total of 2,569 (63.7%) immobilizations were safely avoided using the CCR.
Conclusions: Paramedics safely and accurately applied the CCR to low-risk trauma patients. This significantly reduced scene times and the number of prehospital immobilizations required.

PREDICTING SURVIVAL AFTER PEDIATRIC OUT-OF-HOSPITAL CARDIAC ARREST
Ian R. Drennan, Kevin E. Thorpe, Sheldon Cheskes, Muhammad Mamdani, Damon C. Scales, Anne Marie Guerguerian, Laurie J. Morrison, Rescu, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, University of Toronto
Background: Pediatric out-of-hospital cardiac arrest (OHCA) is unique in terms of epidemiology, treatment, and outcomes. There is a paucity of literature examining predictors of survival to help guide resuscitation in this population. The primary objective was to examine predictors of survival to hospital discharge. The secondary objective was to determine the probability of return of spontaneous circulation (ROSC) over the duration of resuscitation.
Methods: A retrospective cohort study of non-traumatic OHCA (<18 years) treated by EMS from the Toronto Regional RescuNET Epistry-Cardiac Arrest database from 2006 to 2015. We used competing risk analysis to calculate the probability of ROSC over the duration of resuscitation. We then used multivariable logistic regression to examine the role of Utstein factors and duration of resuscitation in predicting survival to hospital discharge. Candidate variables were limited to Utstein factors and duration of resuscitation due to the number of events. We used area under the receiver operating characteristic (ROC) curve (AUC) to determine the predictive ability of our logistic regression model.
Results: A total of 658 patients met inclusion criteria. Survival to discharge was 10.2%, with 70.1% of those children having a good neurologic outcome. The overall median time to ROSC was 23.9 minutes (IQR 15.0-36.7). However, the median time to ROSC for survivors was significantly shorter than for patients who died in hospital (15.9 min (IQR 10.6-22.8) vs. 33.2 min (IQR 22.0-48.6), p<0.001). Older age (OR 0.9, 95% CI 0.86-0.99) and longer duration of resuscitation (OR 0.9, 95% CI 0.88-0.93) were associated with worse outcome, while initial shockable rhythm (OR 5.8, 95% CI 2.0-16.5) and witnessed arrest (OR 2.4, 95% CI 1.10-5.30) were associated with improved patient outcome. The AUC for the Utstein factors was fair (0.77). Including duration of resuscitation improved the discrimination of the model to 0.84.
Conclusion: Inclusion of duration of resuscitation improved the performance of our model compared to Utstein factors alone. However, our results suggest there are a number of other important factors for predicting patient outcome.

POST-RESUSCITATION HYPEROXIA AND HYPOXIA ARE ASSOCIATED WITH INCREASED CARDIAC ARREST MORTALITY
Henry E. Wang, Jim Christenson, et al., University of Alabama at Birmingham
Background: Prior studies have examined the associations between hyperoxia, hypoxia and survival after cardiac arrest, with conflicting results. We sought to determine if hyperoxia and hypoxia in the first 24 hours after return of spontaneous circulation (ROSC) are associated with increased mortality in adult out-of-hospital cardiac arrest (OHCA).
Methods: We used multicenter data from the Resuscitation Outcomes Consortium. We included all adult OHCA with sustained ROSC ≥1 hour after Emergency Department arrival. We excluded patients with no arterial blood gas (ABG) measurements. Using all ABG collected during the first 24 hours of hospitalization, we defined hyperoxia as PaO2 ≥300 mm Hg and hypoxia as PaO2 <60 mm Hg. We identified hyperoxia and hypoxia for the initial, final and any ABG. Using logistic regression, we fit a series of multivariable models evaluating the associations between hyperoxia and hypoxia and hospital mortality, adjusted for age, sex, witnessed arrest, bystander chest compressions, initial electrocardiogram rhythm and ROC clinical site.
Results: Of 57,383 treated OHCA, we included 9,186 in the analysis. Hospital mortality was 60.1%. Patients received a median of 3 ABG measurements (IQR: 2-5). Mean time to first ABG was 1.9±2.8 hours. Initial, final and any hyperoxia occurred in 1,753 (19.1%), 551 (6.0%) and 2,465 (26.5%), respectively. Initial, final and any hypoxia occurred in 774 (8.1%), 705 (7.7%) and 1,743 (19.0%), respectively. On multivariable analysis, initial hyperoxia was not associated with hospital mortality (adjusted OR 1.10, 95% CI: 0.97-1.26); however, final (1.59, 1.25-2.02) and any hyperoxia (1.25, 1.11-1.41) were associated with increased hospital mortality. Initial (adjusted OR 1.57, 95% CI: 1.25-1.97), final (1.64, 1.40-1.92) and any hypoxia (1.72, 1.51-1.97) were associated with increased hospital mortality.
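The unadjusted counterpart of such an odds ratio comes from a 2x2 table of exposure (e.g., any hypoxia) by mortality. A minimal sketch with hypothetical counts (the study's ORs were covariate-adjusted via logistic regression, which this does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.

    a = exposed, died;   b = exposed, survived
    c = unexposed, died; d = unexposed, survived
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative (made-up) counts: deaths among patients with vs. without
# any hypoxia in the first 24 h after ROSC.
or_, lo, hi = odds_ratio_ci(a=1200, b=543, c=4321, d=3122)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The adjusted ORs in the abstract additionally control for age, sex, witnessed arrest, bystander compressions, initial rhythm, and site, so they will generally differ from the crude table-based value.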
Conclusions: In adult OHCA, initial hyperoxia is not associated with hospital mortality. Hyperoxia at other time points within the first 24 hours after ROSC is associated with increased mortality. Hypoxia at any time within the first 24 hours after ROSC is associated with increased mortality.

PREHOSPITAL USE OF NASAL CANNULA END-TIDAL CO2 MONITORING IN NON-INTUBATED MAJOR TRAUMATIC BRAIN INJURY PATIENTS
Octavio Perez, Daniel W. Spaite, Eric Helfenbein, Bruce J. Barnhart, Saeed Babaeizadeh, Chengcheng Hu, Vatsal Chikani, Joshua B. Gaither, Kurt R. Denninghoff, Samuel M. Keim, Chad Viscusi, Duane Sherrill, Amber D. Rice, Bentley J. Bobrow, University of Arizona
Background: Little is known about end-tidal CO2 monitoring using nasal cannula sensors in non-intubated patients (NC-ETCO2). Our objective was to describe the patterns of NC-ETCO2 seen during the prehospital care of spontaneously-breathing, major Traumatic Brain Injury (TBI) patients.
Methods: Continuous NC-ETCO2 data were evaluated from a subset of non-intubated, major (moderate/severe) TBI cases (April 24, 2013-April 20, 2015) in the Excellence in Prehospital Injury Care (EPIC) TBI Study (NIH-1R01NS071049). Cases from 4 EMS agencies that are reporting monitor data (Philips-MRxTM) were included. Descriptive statistics were used to evaluate case and NC-ETCO2 attributes.
Results: Among the 56 cases included (median age: 51, 63% male), 91% had a median respiratory rate (RR) >15/min and 57% had a median RR >20. Six cases (11%) had a median NC-ETCO2 45mmHg. Several common NC-ETCO2 patterns emerged: 1) while the final level varied among patients, the vast majority of cases (46, 82%) attained a stable “plateau” with relatively small variation after that point; 2) ETCO2 often “ramped up” from <10mmHg to the plateau during the initial 10-30 sec of monitoring (24, 43%); 3) many patients (34, 61%) had near-normal (30-34mmHg) or even normal (35-45mmHg) ETCO2 plateaus. In 70% of the cases with normal ETCO2 plateaus, these levels were maintained despite high RR and/or dramatic variations in RR.
Conclusion: We believe this is the first report of the use of NC-ETCO2 monitoring in non-intubated EMS patients. After initial “ramp up,” the vast majority of cases achieved stable readings even though dramatic variations in RR were common. Most patients (57%) were spontaneously hyperventilating at rates more than twice “normal.” Despite this, over half of them had ETCO2 plateaus that were normal or near-normal throughout their course. It is unclear whether the “very low” ETCO2 readings (<30mmHg) represent true physiological hypocapnia or simply “washout” due to the sensors being in open ventilatory space (the naris) with continuous ambient O2 flow. Future studies comparing NC-ETCO2 to measured arterial pCO2 are needed to identify the accuracy of NC-ETCO2 as a tool for evaluating ventilatory physiology in EMS.

TO TUBE OR NOT TO TUBE? ALS PROVIDERS’ PERSPECTIVES ON PEDIATRIC INTUBATION AND AIRWAY MANAGEMENT IN THE FIELD
Sylvia Owusu-Ansah, Asa Margolis, Jennifer Anders, Jason David Cruz, Nelson Tang, Matthew Levy, Johns Hopkins School of Medicine, Emergency Medicine, EMS Fellowship
Background: The objective of this study was to describe Advanced Life Support (ALS) providers’ perspectives on educational and clinical practice considerations surrounding prehospital management of the pediatric airway. Pediatric endotracheal intubation (pETI) by prehospital providers remains an area of debate. There is a paucity of data demonstrating improved outcomes when ALS providers perform pETI, and a need for research evaluating the opinions and perspectives of the ALS providers performing the intubations.
Methods: This was a prospective study using a structured questionnaire to interview EMS providers within the state of Maryland. ALS providers with pETI in their scope of practice were included in the study; incomplete interviews were excluded. ALS providers were given a 23-question survey. Topics included pETI, video laryngoscopy (VL), and pediatric rapid sequence intubation (RSI). Each interview was audio recorded and transcribed by the same researcher, and then reviewed by the researcher and an independent reviewer. Three types of questions were asked in the questionnaire: open-ended, semi-structured and Likert scale questions. Statistics were calculated using Microsoft Excel.
Results: 79 ALS providers were interviewed throughout the state of Maryland, representing all five regions of the state. The median age was 40-50 years, and median years of experience were 5-19 years. 90% were EMT-Ps. Of the 47 ALS providers who had intubated children, 69% stated they had been successful with 100% of their pediatric intubations. 88% of providers felt VL would be helpful. All of the ALS providers interviewed were of the opinion that pediatric intubation should be within their scope of practice, and all reported they would benefit from additional training focused on pediatric intubation, airway management and other pediatric emergencies.
Conclusions: These ALS providers want ongoing pediatric airway training. A focus on high-fidelity simulation tools, operating room access, and education by pediatric airway specialists may prove beneficial in providing ALS personnel with the additional training they desire. Video laryngoscopy may be a helpful tool for successfully managing the pediatric airway. The perspectives of the ALS providers themselves represent an important component in the evaluation of education and clinical practice relating to pediatric airway management.

BASIC LIFE SUPPORT VERSUS ADVANCED LIFE SUPPORT: COMPARISON OF NEUROLOGICAL OUTCOME IN OUT-OF-HOSPITAL CARDIAC ARREST BASED ON INITIAL RHYTHMS
Patrick Medado, et al., Department of Emergency Medicine, Wayne State University
Background: Previous studies suggest that basic life support (BLS) care may be more effective than advanced life support (ALS) in out-of-hospital cardiac arrests (OHCA) when analyzed for neurological outcomes at discharge. It is unclear, however, if BLS or ALS transfer impacts the outcome of OHCA patients when the initial rhythm is either Ventricular Fibrillation (VF) or Ventricular Tachycardia (VT). This study evaluates the neurological outcomes of patients receiving either ALS or BLS care stratified by initial rhythm.
Methods: This is a retrospective cohort study of 1,596 patients who experienced non-traumatic OHCA in Detroit, MI and were enrolled in the Cardiac Arrest Registry to Enhance Survival (CARES) from January 1, 2014 to December 31, 2015 as part of the Save-MI-Heart initiative. All patients were initially categorized into two cohorts by initial rhythm, shockable (VF/VT) and unshockable (all other pulseless rhythms). The cohorts were subsequently clustered based on ALS and BLS care. For all cases, resuscitation was attempted and neurological outcome was evaluated at hospital discharge as good cerebral performance (CPC 1 or 2) or worse (CPC > 2). Fisher’s exact and chi-squared tests were used to determine statistical difference.
Results: Of the 1,596 OHCA patients, 56% were African American and 55% were male, with a mean age of 60.2 years (SD ±20.7). Within the unshockable cohort, 8 of 653 (1.2%) ALS patients and 27 of 763 (3.5%) BLS patients were discharged with CPC 1 or 2 (p<0.01). In the shockable cohort, 2 of 60 (3.3%) ALS patients and 4 of 120 (3.3%) BLS patients were discharged with CPC 1 or 2 (p=1.00). On average, the time from call to arrival in the ED was about 10 minutes longer in the ALS group.
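For counts of this size, the unshockable-cohort comparison can be approximated with a two-proportion z-test (a sketch only; the abstract reports Fisher's exact and chi-squared tests, which give comparable p-values at these sample sizes):

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Unshockable cohort: CPC 1-2 at discharge, ALS 8/653 vs. BLS 27/763.
z, p = two_prop_z(8, 653, 27, 763)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.01, consistent with the abstract
```

The shockable cohort (2/60 vs. 4/120, identical 3.3% proportions) would give z = 0 by the same calculation, matching the reported p = 1.00.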
Conclusions: CARES registry data show better neurological outcome at hospital discharge with BLS care than with ALS care in the unshockable cohort. The results do not provide evidence that patients experiencing OHCA with an initial rhythm of VF/VT have better neurological outcomes with either BLS or ALS care.

HELICOPTER USAGE ON OVERALL MORTALITY IN OUT-OF-HOSPITAL INJURIES: IS THERE AN IMPACT FROM CHANGE IN UTILIZATION AFTER A CRASH?
Novneet Sahu, David Leventhal, Mia Papas, Brittany Cheadle, Ross Megargel, Christiana Care Health System
Background: Helicopter crashes spark debate on the association between helicopter usage and mortality among patients with out-of-hospital injuries. Delaware implemented its trauma system in 2000, with a subsequent decrease in mortality rates and an increase in the use of helicopters for transport. In 2008 a helicopter crash that killed the crew and two patients drastically changed usage patterns, and helicopter usage in Delaware has since decreased dramatically. The purpose of this study was to determine the odds of mortality for patients with severe trauma (ISS >24) transported via air versus ground. A change in the mortality odds ratio was expected with decreased helicopter usage.
Methods: This is a retrospective study of de-identified data from the Delaware State Trauma Registry during a 14-year period between 1999 and 2012, representing 3,621 patients. Differences in demographic and clinical characteristics including age, gender, race/ethnicity, injury type, ISS score, anti-coagulant use, hospital or ICU length of stay, place of injury and type of trauma center were examined with chi-square tests for categorical variables and t-tests for continuous variables. A Mann-Whitney U test was performed to explore differences between non-normally distributed variables. Multivariable logistic regression models were employed to examine the association between transportation mode (helicopter vs. ambulance) and mortality. The data were analyzed in two time periods: 1999-2008 and 2009-2012. The models were used to compute the odds of mortality.
Results: After adjusting for demographic and clinical characteristics, the odds of mortality did not differ between helicopter and ambulance transport (OR = 1.35, 95% CI: 0.968, 1.886). The odds of mortality for helicopter compared to ambulance were similar in the early years (OR = 1.40, 95% CI: 1.02 to 1.92) and the later years (OR = 1.47, 95% CI: 0.809 to 2.66).
Conclusions: The association between transportation mode and mortality remained the same despite changes in helicopter usage. This adds to the debate of trauma helicopter usage and further study in a wider geographic area is necessary to understand the risks and benefits of transportation mode.

FAVORABLE NEUROLOGICAL OUTCOME ASSOCIATED WITH EMS COMPLIANCE WITH AN ON-SCENE RESUSCITATION TIME PROTOCOL
Tae Han Kim, Ki Jeong Hong, Sang Do Shin, Gwan Jin Park, Eui Jung Lee, Young Sun Ro, Kyoung Jun Song, Joo Jeong, Seoul National University, Department of Emergency Medicine
Background: The Korean national emergency care protocol for EMS providers recommends a minimum of 5 minutes of on-scene resuscitation before transport to hospital in cases of Out-of-Hospital Cardiac Arrest (OHCA). We compared survival outcomes of OHCA patients according to EMS compliance with the scene time interval (STI) protocol.
Methods: Adult OHCAs with presumed cardiac etiology assessed and treated by EMS during a 2-year period were analyzed. Non-compliance was defined as hospital transport with an STI of less than 6 minutes without return of spontaneous circulation (ROSC) on scene. A propensity score for protocol compliance was calculated, and 1:1 matching between the compliance and non-compliance groups was performed based on this score. Univariate analyses as well as a multivariable logistic model were used to evaluate the effect of compliance on survival outcomes.
Results: Among a total of 28,100 OHCAs, EMS transported 7,026 (25.0%) without achieving ROSC on scene and with an STI of less than 6 minutes. 6,854 cases in each group were matched using propensity score matching. Overall survival to discharge did not differ between groups (4.6% for the compliance group vs. 4.5% for the non-compliance group, p=0.78); the adjusted odds ratio of compliance for survival to discharge was 1.12 (95% CI 0.92-1.36). More patients in the compliance group had a favorable neurological outcome (2.5% vs. 1.7%, p<0.01), with an adjusted odds ratio of 1.91 (95% CI 1.42-2.59).
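One common way to perform 1:1 propensity matching of the kind described above is greedy nearest-neighbor matching on the score. A minimal sketch with made-up scores and a hypothetical caliper (the abstract does not specify the matching algorithm or caliper used):

```python
# Greedy 1:1 nearest-neighbor matching on precomputed propensity scores.
# Scores and caliper are illustrative, not from the study.

def greedy_match(treated, controls, caliper=0.05):
    """Pair each treated score with the closest unused control score
    within the caliper; unmatched treated subjects are dropped."""
    available = sorted(controls)
    pairs = []
    for t in sorted(treated):
        best = min(available, key=lambda c: abs(c - t), default=None)
        if best is not None and abs(best - t) <= caliper:
            pairs.append((t, best))
            available.remove(best)  # each control is used at most once
    return pairs

treated_scores = [0.31, 0.47, 0.52, 0.90]
control_scores = [0.30, 0.33, 0.50, 0.55, 0.61]
print(greedy_match(treated_scores, control_scores))
# [(0.31, 0.3), (0.47, 0.5), (0.52, 0.55)] — 0.90 has no control in caliper
```

Matching on the score balances the measured covariates between compliance and non-compliance groups, so the subsequent outcome comparison is closer to like-with-like.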
Conclusions: Although survival to discharge did not differ with EMS non-compliance with the STI protocol, fewer patients survived with favorable neurological outcomes when EMS did not remain on scene for the recommended time before transport.

COMPARING THE ACCURACY OF MASS CASUALTY TRIAGE SYSTEMS IN A PEDIATRIC POPULATION
Robert W. Heffernan, E. Brooke Lerner, Courtney H. McKee, Lorin R. Browne, J. Marc Liu, Richard B. Schwartz, Medical College of Wisconsin
Background: Until recently, it was not possible to compare accuracy between different mass casualty triage systems because there was no gold standard for determining the “correct” triage category. An expert panel recently published a gold standard for each triage category allowing triage systems to be compared. Our objective was to compare the accuracy of 4 different mass casualty triage systems (SALT, JumpSTART, TriageSieve, and CareFlight) when used for children.
Methods: We observed the ED triage of patients <18 years presenting to the only pediatric specialty/Level 1 trauma hospital in the county. A single certified EMS provider observed each patient’s initial triage and recorded all findings that were necessary to categorize the patient using each of the four triage systems. These data were used to electronically assign a triage category to each patient for each triage system. Hospital medical records were reviewed for each enrolled patient, and the gold standard triage category was assigned based on the treatments received and final outcome. Descriptive statistics, including 95% confidence intervals (CI), were used to compare accuracy, over- and under-triage rates for each of the triage systems.
Results: 110 subjects were enrolled. Of those, 50.9% were male and 55.5% were transported by ambulance. When compared to the gold standard definitions, SALT was narrowly found to be the most accurate (59.1%, CI: 49.9-68.3) compared to JumpSTART (56.4%, CI: 47.1-65.6), CareFlight (55.5%, CI: 46.2-64.7), and TriageSieve (55.5%, CI: 46.2-64.7). SALT had the lowest rate of under-triage (34.5%, 95% CI: 25.7-43.4) compared to JumpSTART (39.1%, CI: 30.0-48.2), CareFlight (39.1%, CI: 30.0-48.2) and TriageSieve (39.1%, CI: 30.0-48.2). SALT had the highest over-triage rate (6.4%, CI: 1.8-10.9) compared to JumpSTART (4.5%, CI: 0.7-8.4), CareFlight (5.5%, CI: 1.2-9.7) and TriageSieve (5.5%, CI: 1.2-9.7). For each triage algorithm, the most common error was designating as “minimal” a patient who should have been triaged as “delayed” according to the gold standard.
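The reported SALT accuracy interval is consistent with a normal-approximation (Wald) confidence interval on 65/110 correct triages (the count 65 is inferred here from the reported 59.1%, not stated directly in the abstract):

```python
import math

def wald_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion k/n."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# SALT accuracy: assume 65 of 110 subjects correctly triaged.
p = 65 / 110
lo, hi = wald_ci(65, 110)
print(f"{p:.1%} (95% CI: {lo:.1%}-{hi:.1%})")  # 59.1% (95% CI: 49.9%-68.3%)
```

The same formula reproduces the under-triage interval (38/110 gives 34.5%, CI 25.7-43.4), which suggests the abstract used this approximation throughout.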
Conclusions: We found that the four most widely used mass casualty triage systems performed similarly in an ED-based pediatric population. None was highly accurate, and the rate of under-triage was unacceptable. Differentiating between “minimal” and “delayed” should be the focus of future studies, since this was the most common inaccuracy.

DECLINE IN CHEST COMPRESSION VELOCITY OVER TIME IS RELATED TO OUT-OF-HOSPITAL CARDIAC ARREST PATIENT OUTCOME
Samuel Berger, Robyn McDannold, Margaret Mullins, University of Arizona
Background: High chest compression release velocity (CCRV) has been independently associated with improved survival from out-of-hospital cardiac arrest (OHCA). Previous studies demonstrate that repeated compressions soften the chest and may contribute to changes in compression dynamics. We assessed the change in CCRV over time and its association with favorable outcomes.
Methods: CCRV was measured using an accelerometer connected to a defibrillator (E/X Series, ZOLL Medical) during adult OHCA resuscitations from 2 EMS agencies in Arizona between 10/2008 and 06/2015. All subjects had at least 20 compressions and a compression duration of at least 1 minute within the first 10 minutes. Mean CCRV was summarized for the first and second 5-minute intervals, and the rate of change in CCRV between intervals was estimated via linear regression. Patient-level means and rates of change were summarized within each subgroup by the median and interquartile range, and were compared across subgroups by the Kruskal-Wallis test. All tests were two-sided with a significance level of 0.05.
Results: A total of 1,308 subjects were included. The median number of compressions per subject was 785, and the median compression duration was 518.9 seconds. CCRV was significantly higher during minutes 0-5 in patients with survival to hospital discharge compared to non-survivors [373 mm/sec (IQR 315-412) vs. 338 (289-392), p=0.0013] and in patients with favorable (CPC=1 or 2) compared to unfavorable neurological outcome [378 (319-414) vs. 338 (289-392), p=0.001]. CCRV remained significantly higher for survivors compared to non-survivors in minutes 5-10 [355 (294-391) vs. 332 (285-382), p=0.02] but was similar by neurological outcome [353 (293-385) vs. 332 (285-382), p=0.2]. The decline in CCRV was greater in patients with survival to discharge, favorable neurological outcome, and ROSC (p<0.05).
Conclusions: CCRV was higher throughout the first 10 minutes of CPR for OHCA survivors and patients with favorable neurological outcome. However, the rate of decline in CCRV was greater for patients with positive outcomes, possibly because their CCRV was significantly higher in minutes 0-5.

RIGID CERVICAL COLLAR DOES NOT AFFECT CEREBRAL BLOOD FLOW INDEX, BUT POSITIONING DOES
David A. Wampler, Brian Eastridge, Ronald Stewart, Rena Summers, Brandi Wright, Ali Seifi, The University of Texas Health Science Center at San Antonio
Background: The process of protecting the spine during EMS transport after a traumatic event has become a focus of debate. Historically, patients at risk for spine fracture or spinal cord injury have been placed in a rigid cervical collar (c-collar) and positioned supine on a rigid long spine board with the head secured with foam headblocks. To maximize effectiveness, the c-collar must be properly fitted, so as to provide the needed support without impinging upon blood flow or airflow. The goal of this project was to study the effect of c-collar and subject positioning on cerebral blood flow.
Methods: This was conducted as a serial n-of-1 study in which each subject underwent multiple conditions while a non-invasive cerebral blood flow monitor established a cerebral blood flow index (CBFI). CBFI data were collected in six positions: standing, sitting up, 45 degrees, 30 degrees, 10 degrees, and supine, each with and without the c-collar. Descriptive statistics were used for CBFI in each condition, and a t-test was used to identify significant changes in CBFI.
Results: Five volunteers were recruited, each tested in 6 positions with and without a c-collar. Mean age was 49 ± 15 years, and 60% were male. The CBFI mean of means was 71.0 with the c-collar and 69.4 without the c-collar. Only one subject demonstrated a statistically significant difference in CBFI with the c-collar. The CBFI mean of means for position was 72.6 for head of bed greater than 30 degrees and 68.1 for less than 30 degrees. All subjects demonstrated >99% confidence for a statistically significant difference in CBFI when dichotomized using head of bed at 30 degrees.
Conclusions: Elevation of the head of the bed has greater influence on CBFI than does the c-collar. While this may not be clinically significant in healthy volunteers, this change in cerebral perfusion may have clinical significance in a TBI population. Continued research is needed to identify best practice for patient positioning during transport to protect the cervical spine and optimize cerebral perfusion. The current cervical spine motion restriction practice of supine and secured on a long spine board may be suboptimal.

THE USE OF FENTANYL IN PEDIATRIC TRAUMA PATIENTS POST-IMPLEMENTATION OF THE HANDTEVY FIELD GUIDE
Lara Davidovic Rappaport, David Edwards, Kevin McVaney, Aaron Eberhardt, Whitney Barrett, Kathleen Adelgais, Denver Health Hospital, University of Colorado
Background: There are multiple barriers to prehospital analgesia administration for traumatic pain in children. Younger children are often not treated with the same frequency as older children; cited barriers include the need for intravenous access, fear of complications, and fear of giving an incorrect dose. The introduction of a field guide with customized, age-specific dosing recommendations for intravenous and intranasal opioid delivery may improve treatment of pain in the prehospital setting. The objective of our study was to evaluate the change in prehospital fentanyl administration to children after the introduction of the Handtevy field guide in our hospital-based Emergency Medical Services (EMS) system.
Methods: We conducted an analysis of patients entered into our EMS patient care registry 1 year before and 1 year after introduction of the Handtevy system in July 2015. We examined the care delivered to trauma patients aged 0-14 years transported by our EMS agency. We compared the difference in treatment proportions between the two time periods and examined whether age and route (intravenous versus intranasal) were associated with change in administration of fentanyl.
Results: During the study period, a total of 3,419 patients met enrollment criteria: 1,649 patients were transported before and 1,770 after the introduction of the Handtevy System. The two groups were similar with regard to age and gender. The proportion of patients treated with fentanyl increased after the pain field guide implementation (13.2% vs. 17.9%, p<0.01). The greatest increase was among children 0-5 years receiving intranasal fentanyl (OR 4.1, 95% CI 1.9, 8.6).
Conclusions: The introduction of the Handtevy field guide with pre-calculated doses of opiate medication increased analgesia delivery to patients aged <5 years, an age group often not treated in the prehospital setting. Future research is needed to identify ways to maximize appropriate pain treatment for children in the out-of-hospital setting.

NEW INSIGHTS INTO CPR OPTIMIZATION THROUGH A FLOW PER COMPRESSION ANALYSIS OF CHEST COMPRESSION GENERATED BLOOD FLOW
Joshua Lampe, Tai Yin, George Bratinov, Wes Shoap, Yuxi Lin, Christopher Kaufman, Lance Becker, Northwell Health
Background: During cardiac arrest and CPR, the two primary pumps of the cardiovascular circuit (the right and left ventricles) are supplanted by a single pump: the chest compression. In pre-clinical experiments, blood flows are reported as flow per minute (L/min), a characterization that is highly dependent on chest compression (CC) rate. Analyzing the flow generated by each compression (“cardiac output” in L/comp) could reveal important information relevant to lung perfusion and how the circuit as a whole responds to CPR. It was hypothesized that increasing CC rate would improve per-minute blood flow but reduce flow on a per-compression basis.
Methods: CPR was performed on nine domestic swine (~30 kg) using standard monitoring. Flow was measured in the right common carotid and abdominal aorta. Ventricular fibrillation was electrically induced. Mechanical CC were started after ten minutes of untreated VF. CC were delivered at rates of 50, 75, 100, 125, or 150 compressions per minute (cpm) and a depth of 2” for a total of 54 min. CC rates were changed every 2 min and were randomized.
Results: Across all rates, per-minute flows were strongly rate dependent in minutes 0-10, favoring fast rates, and weakly rate dependent in minutes 10-20, favoring rates near 100 cpm. Per-compression flow showed a weaker rate dependence in minutes 0-10, favoring rates below 100 cpm, but a strong rate dependence in minutes 10-20, also favoring rates below 100 cpm.
Conclusions: While CC rates greater than 100 cpm may provide more total blood flow early during CPR, analysis of blood flow per compression indicates that they can become detrimental after 10 minutes of ongoing CPR. During a prolonged resuscitation, a longer time after a given compression might be needed for venous refill thereby favoring rates of 100 cpm or less.
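The distinction the authors draw rests on a simple identity: per-minute flow equals per-compression flow times compression rate. A minimal Python sketch with purely illustrative numbers (not data from the study) shows how a per-compression penalty at high rates can erase the rate advantage over time:

```python
def per_minute_flow(ml_per_compression, rate_cpm):
    """Per-minute flow in L/min = (mL per compression x compressions per minute) / 1000."""
    return ml_per_compression * rate_cpm / 1000.0

# Hypothetical per-compression flows (mL), assuming flow per compression
# falls as rate rises because there is less time for venous refill.
early = {50: 14.0, 100: 10.0, 150: 8.0}   # minutes 0-10 of CPR
late = {50: 10.0, 100: 6.0, 150: 3.0}     # minutes 10-20 of CPR

early_per_min = {rate: per_minute_flow(ml, rate) for rate, ml in early.items()}
late_per_min = {rate: per_minute_flow(ml, rate) for rate, ml in late.items()}

# Early in CPR the fastest rate still wins on total flow (1.2 vs. 1.0 L/min),
# but later the per-compression penalty dominates and 100 cpm overtakes
# 150 cpm (0.6 vs. 0.45 L/min), mirroring the abstract's findings.
```

The numbers are invented; only the arithmetic relationship between the two flow measures is taken from the abstract.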

CLASSIFICATION OF BYSTANDER CPR USING 911-CALL REVIEW VERSUS FIELD REPORT
Stephen G. Sanko, Huihui Zhang, Christianne Lane, Abe Flinder, Pavan Reddy, Luci Cassella, Sumeet Sidhu, Jay Balagna, Marc Eckstein, Department of Emergency Medicine, Keck School of Medicine, USC, Los Angeles Fire Department
Background: Bystander CPR is usually captured by paramedic observation on scene, though some prehospital systems are starting to classify it based on 911-call review. The objective was to compare rates of bystander CPR classified from 911 calls with rates from paramedic reports on scene.
Methods: This was a retrospective review of Los Angeles Fire Department (LAFD) 911 calls and electronic health records for cases of LAFD-attended OOHCA with attempted resuscitation from January to March 2014 and January to March 2015. Trained non-LAFD abstractors listened to all recorded calls and documented whether chest compressions were initiated. Field personnel are expected to document whether bystander chest compressions were initiated, either through bystander interrogation or direct observation. The primary outcome was inter-rater agreement (Cohen's kappa) between 911-call reviewers and field personnel on the presence of bystander CPR.
Results: Of 1,027 calls during the study period, 13 were unavailable and 372 met one or more exclusion criteria, leaving 642 calls. The overall bystander CPR rate was 59.3% on 911-call review versus 52.1% on field care reports, for a primary-outcome kappa of 0.37 (95% CI 0.29-0.44), indicating fair agreement. In 182 cases (31.4%), 911-call review and field report were discordant on the presence of bystander CPR: 12% of cases had no bystander CPR on 911-call review but were reported by field personnel to have bystander CPR, and 19% had bystander CPR on 911-call review but not per field personnel. A similar overall percentage of discordance was noted among English-speaking callers (31.3%, κ=0.35) and limited-English-proficiency callers (32.8%, κ=0.37); patients in residential (30.0%, κ=0.40) vs. public settings (31.8%, κ=0.39); patients under 65 (34.5%, κ=0.30) and over 65 years old (28.4%, κ=0.43); and minority (33.8%, κ=0.33) and white non-Hispanic patients (22.2%, κ=0.52).
Conclusions: Inter-rater agreement on the presence of bystander CPR between 911-call review and field report is only fair, and up to one-third of cases may be misclassified. Given the extraordinary resources dedicated to promoting bystander CPR, clearer consensus should be developed on how to accurately measure community bystander CPR rates.
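Cohen's kappa, the agreement statistic used here, discounts raw percent agreement by the agreement expected from chance alone. A stdlib-only Python sketch with hypothetical counts (the abstract does not report the full 2×2 table):

```python
def cohens_kappa(a, b, c, d):
    """Kappa for a 2x2 agreement table:
         a = both raters say bystander CPR present,
         d = both say absent,
         b = rater 1 only, c = rater 2 only."""
    n = a + b + c + d
    p_obs = (a + d) / n                       # observed agreement
    # Chance agreement from each rater's marginal totals.
    p_yes = ((a + b) / n) * ((a + c) / n)
    p_no = ((c + d) / n) * ((b + d) / n)
    p_exp = p_yes + p_no
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: 70% raw agreement shrinks to kappa = 0.40
# once chance agreement is removed, i.e. only "fair to moderate".
k = cohens_kappa(a=20, b=5, c=10, d=15)
```

This illustrates why a 59.3% vs. 52.1% rate comparison alone cannot substitute for a chance-corrected agreement measure.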

A SYSTEMATIC REVIEW OF THE ASSOCIATION BETWEEN EMS TIME FACTORS AND SURVIVAL
Ian Blanchard, Alka Patel, Dan Lane, Amy Couperthwaite, Dirk Chisholm, Dean Yergens, Diane Lorenzetti, Gerald Lazarenko, Eddy Lang, Christopher Doig, William Ghali, Alberta Health Services, University of Calgary
Background: EMS time factors such as prehospital, activation, response, scene and transport intervals have been used as a measure of EMS system quality with the assumption that shorter EMS time factors save lives. The objective of this study was to conduct a systematic review of the association between EMS time factors and survival to determine if, for adults and children accessing ground EMS (population), operational time factors (intervention and control) were associated with survival at hospital discharge (outcome). These results may be important for making evidence-based operational decisions, such as performance metrics and care strategies involving closest hospital bypass.
Methods: Medline, EMBASE, and CINAHL were searched up to January 2015 for articles reporting original data associating EMS operational time factors (prehospital, activation, scene, and transport intervals) with survival. Response-interval and combined activation-and-response-interval studies retrieved by this search have been previously reported. Conference abstracts and non-English-language articles were excluded. Two investigators independently assessed the candidate titles, abstracts, and full text, with discrepant reviews resolved by consensus.
Results: A total of 10,151 abstracts were screened for potential inclusion in this review. Of these, 199 articles were reviewed in full text, and 24 met inclusion criteria. All were observational studies, with 10 (41.7%) reporting time factors in the primary analysis and the remainder assessing time factors as a secondary analysis. Of the primary-analysis studies, eight used multivariable analysis, and three of these found statistically significant results. One study reported that shorter combined scene and transport intervals were associated with increased survival in acute heart failure patients. Two studies in trauma patients had somewhat conflicting results: one reported that a shorter prehospital interval was associated with increased survival, whereas the other reported increased survival with longer scene and transport intervals.
Conclusions: Our systematic review identified some studies demonstrating an association between shorter EMS time factors and increased survival. However, this finding is not consistent across studies, and there is both clinical and statistical heterogeneity. Interpretive caution is required before translating these findings to practice.

PEDIATRIC OUT-OF-HOSPITAL CARDIAC ARREST IN SOUTHERN ONTARIO
Ian R. Drennan, Sheldon Cheskes, Kevin E. Thorpe, Laurie J. Morrison, Rescu, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, University of Toronto
Background: Pediatric out-of-hospital cardiac arrest (OHCA) is a rare but devastating event with few survivors. Most cardiac arrest literature focuses on adult patients, and little is known about the pediatric population. Our objective was to describe the incidence, demographics, treatment, and outcomes of OHCA in southern Ontario.
Methods: We conducted a retrospective cohort study of consecutive non-traumatic OHCA in patients <18 years from the Toronto Regional RescuNET Epistry Cardiac Arrest database, including cases between 2006 and 2015. We calculated the incidence of OHCA and used descriptive analyses to examine the characteristics, treatment processes, and outcomes of cardiac arrest care.
Results: A total of 792 non-traumatic pediatric OHCA occurred during the study period, 667 (84%) of which were treated by EMS. The incidence of OHCA was 6.3/100,000 population/year, and the incidence of treated OHCA was 5.3/100,000 population/year. The mean age (standard deviation) for treated OHCA was 5.4 (±6.4) years, and 60% were male. Only 15% of OHCA occurred in a public location, and the majority of cases (67%) were unwitnessed, with 27% witnessed by bystanders and 6% by EMS. Bystander CPR was performed in 47% of OHCA; however, AEDs were applied in only 12 (1.5%) cases. Mean EMS response time was 6.1 (±2.3) minutes, and only 10% of patients were found in an initial shockable rhythm. The majority of cases were attended by advanced life support providers (90%), received epinephrine (60%), and had an advanced airway (68%). Of cases with CPR quality measures (n=156), paramedic CPR was consistent with guideline recommendations, with a median chest compression rate of 107/min (IQR 100-125), a median chest compression fraction of 0.76 (IQR 0.65-0.85), and a median depth of 4.2 cm (IQR 3.4-4.9). Of the treated OHCA, 40% had return of spontaneous circulation and 10% survived to hospital discharge, 78% of these with good neurological outcome (mRS<3). Survival to hospital discharge for bystander-witnessed OHCA with an initial shockable rhythm was 45%, with 83% having good neurological outcome (mRS<3).
Conclusions: The incidence of OHCA in this study was low. Although we noted many factors typically associated with poor outcome, the survival rates for OHCA were similar to that of the adult population.
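Incidence here is cases per 100,000 population per year. The catchment population is not stated in the abstract; the sketch below uses an assumed pediatric population of about 1.26 million (a hypothetical figure chosen to be consistent with the reported rates) purely to show the calculation:

```python
def incidence_per_100k_per_year(cases, population, years):
    """Crude incidence rate: cases per 100,000 population per year."""
    return cases / (population * years) * 100_000

ASSUMED_PEDIATRIC_POPULATION = 1_260_000  # hypothetical catchment size, not from the study
YEARS = 10                                # study period 2006-2015

all_ohca = incidence_per_100k_per_year(792, ASSUMED_PEDIATRIC_POPULATION, YEARS)
treated = incidence_per_100k_per_year(667, ASSUMED_PEDIATRIC_POPULATION, YEARS)
# all_ohca comes out near 6.3 and treated near 5.3, matching the reported rates.
```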

EXPANDING NALOXONE AVAILABILITY TO FIRST RESPONDERS AND EMTS: TREATING MORE PATIENTS FASTER
David W. Strong, Sean Larkins, Robert B. Dunne, Wayne State University, DMC
Background: In response to the opioid overdose crisis, advocates are working to increase the availability of naloxone to first responders and to the public. In October 2015 the state legislature mandated that naloxone be carried on all licensed EMS vehicles, leading to the development of a protocol for medical first responders (MFRs) and Basic Life Support Emergency Medical Technicians (BLS-EMTs) to treat suspected opioid overdose with intranasal naloxone. We hypothesized that the protocol change would increase the number of patients treated for opioid overdose, and that more rapid delivery would be associated with improved clinical response.
Methods: The majority of transports in the system are by BLS-EMT ambulance; MFR engines also respond to the highest-priority calls, and the nearest unit is dispatched regardless of provider level. This was a retrospective chart review of the six-month period prior to the institution of the MFR/BLS naloxone protocol and a weekly review of the six-month period after protocol adoption. The prehospital patient care record for each patient treated with naloxone was reviewed. A positive response to naloxone was defined as an increase in Glasgow Coma Scale score ≥6 or an increase in respiratory rate ≥6 from a baseline rate ≤8. Secondary measures included time from dispatch to arrival at the patient, time from dispatch to naloxone delivery, and route of naloxone delivery.
Results: 150 and 336 cases were analyzed for the periods before and after institution of the new protocol, respectively. There were 25 MFR and 159 BLS administrations of naloxone after the new protocol. A positive clinical response occurred in 53% and 65% of patients in the before and after periods, respectively (p<0.05). Shorter dispatch-to-treatment time was associated with a positive clinical response (14 vs. 17 minutes, p<0.01).
Conclusions: The naloxone protocol for MFR and BLS providers more than doubled the number of patients treated in our predominantly BLS system. Overall clinical response rate also improved, which may be the result of focused training on the new protocol and faster time to treatment. More study is warranted on expanded naloxone use.

IMPLEMENTATION OF AN EDUCATIONAL PROGRAM TO IMPROVE THE CARDIAC ARREST DIAGNOSTIC ACCURACY OF AMBULANCE COMMUNICATION OFFICERS: A CONCURRENT CONTROL BEFORE-AFTER STUDY
Christian Vaillancourt, Ann Kasaboski, Manya Charette, Lisa A. Calder, Loree Boyle, Shunichiro Nakao, Denis Crete, Molly Kline, Robin Souchuk, Niels Kristensen, George A. Wells, Ian G. Stiell, Department of Emergency Medicine, University of Ottawa
Background: Most ambulance communication officers receive minimal education on agonal breathing, often leading to unrecognized out-of-hospital cardiac arrest (OHCA). We sought to evaluate the impact of an educational program on cardiac arrest recognition and on bystander CPR and survival rates. We also sought to determine whether the intervention resulted in chest compressions, and any associated injuries, in patients not in OHCA.
Methods: Ambulance communication officers in Ottawa, Canada received additional training on agonal breathing, while the control site (Windsor, Canada) did not. Sites were compared to their pre-study performance (before-after design), and to each other (concurrent control). Trained investigators used a piloted-standardized data collection tool when reviewing the recordings for all potential OHCA cases submitted. OHCA was confirmed using our local OHCA registry, and we requested 911 recordings for OHCA cases not initially suspected. Two independent investigators reviewed medical records for non-OHCA cases receiving telephone-assisted CPR in Ottawa. We present descriptive and chi-square statistics.
Results: There were 988 confirmed and suspected OHCA in the “before” group (540 Ottawa, 448 Windsor) and 1,076 in the “after” group (689 Ottawa, 387 Windsor). Characteristics of “after”-group OHCA patients were: mean age (68.1 Ottawa, 68.2 Windsor), male (68.5% Ottawa, 64.8% Windsor), witnessed (45.0% Ottawa, 41.9% Windsor), and initial rhythm VF/VT (28.9% Ottawa, 22.5% Windsor). Before-after comparisons were: cardiac arrest recognition (from 65.4% to 71.9% in Ottawa, p=0.03; from 70.9% to 74.1% in Windsor, p=0.37), bystander CPR rates (from 23.0% to 35.9% in Ottawa, p=0.0001; from 28.2% to 39.4% in Windsor, p=0.001), and survival to hospital discharge (from 4.1% to 12.5% in Ottawa, p=0.001; from 3.9% to 6.9% in Windsor, p=0.03). “After”-group comparisons between Ottawa and Windsor (control) were not statistically different, except for survival (p=0.02). Agonal breathing was common (25.6% Ottawa, 22.4% Windsor) and present in 18.5% of missed cases (15.8% Ottawa, 22.2% Windsor, p=0.27). In Ottawa, 31 patients not in OHCA received chest compressions resulting from telephone-assisted CPR instructions; none suffered injury or adverse effects.
Conclusions: While all OHCA outcomes improved over time, the educational intervention significantly improved OHCA recognition in Ottawa, and appeared to mitigate the problem of agonal breathing.

ADDITIONAL LIVES SAVED BY THE RESUSCITATION OUTCOMES CONSORTIUM: CARDIAC ARREST
James J. Menegazzi, Jessica Ng, University of Pittsburgh
Background: The National Institutes of Health, the Canadian Institutes of Health Research, and other agencies established the Resuscitation Outcomes Consortium (ROC) to conduct clinical trials in out-of-hospital cardiac arrest (OHCA) and trauma resuscitation. ROC comprised 11 Regional Clinical Centers and a Data Coordinating Center, as well as over 250 high-functioning EMS systems. We sought to estimate whether, and how many, additional lives may have been saved by ROC.
Methods: We analyzed all non-redundant results from ROC clinical trials to determine the actual number of survivors to hospital discharge. We used the first ROC Epistry-Cardiac Arrest report (Nichol et al., 2008), which preceded any ROC clinical trial, as the expected rate of survival, and determined how many additional lives were saved (observed) compared to the baseline rate of 7.9% overall survival. For the ALPS trial we used a survival rate of 21%, as these were all cases with ventricular fibrillation. We analyzed only EMS-treated cases from clinical trials and inter-trial gaps that came after the 2008 report.
Results: There were 61,364 cases of EMS-treated cardiac arrest, of which 49,466 were enrolled in ROC clinical trials and post-2008 gaps; the ALPS trial enrolled 3,023 of these. The expected number of survivors for all ECG rhythms was 3,668 among the 46,440 non-ALPS cases; the observed number was 4,334 (666 additional survivors). In the ALPS trial the expected number of survivors was 635, and the actual number was 702 (67 additional survivors).
Conclusions: In the period following the 2008 report, we estimate that an additional 733 lives of OHCA patients were saved in the ROC EMS systems.
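The estimate reduces to observed-minus-expected arithmetic. A short Python sketch reproduces the abstract's figures (expected counts are taken as reported; each is within rounding of n × baseline rate):

```python
# (n, baseline survival rate, expected survivors as reported, observed survivors)
cohorts = {
    "non-ALPS (all rhythms)": dict(n=46_440, rate=0.079, expected=3_668, observed=4_334),
    "ALPS (VF/VT)": dict(n=3_023, rate=0.21, expected=635, observed=702),
}

# Sanity check: each reported 'expected' count is within rounding of n x rate.
for c in cohorts.values():
    assert abs(c["n"] * c["rate"] - c["expected"]) <= 1

additional = {name: c["observed"] - c["expected"] for name, c in cohorts.items()}
total_additional = sum(additional.values())
# additional -> 666 (non-ALPS) and 67 (ALPS); total 733, as reported.
```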

HELICOPTER VS. GROUND INTER-FACILITY TRANSFER OF STEMI PATIENTS
Francis X. Guyette, Patrick Williams, Denisse Sequerria, Christian Martin-Gill, University of Pittsburgh
Background: The gold standard of care for patients with ST-segment elevation myocardial infarction (STEMI) is percutaneous coronary intervention (PCI) within 90 minutes of symptom onset. Many patients, however, do not present to a hospital with this capability and must be transferred by ambulance to a facility that has it. We aimed to compare the characteristics and outcomes of STEMI patients transported by air ambulance with those transported by ground EMS.
Methods: We performed a retrospective chart review of prehospital and in-hospital data from 217 STEMI cases from 2014 through 2015. Cases were divided into two treatment groups: STEMI patients transferred by helicopter ambulance and those transferred by ground ambulance. We characterized patients by initial vital signs and need for vasopressors or advanced airway support. In-hospital data were used to determine peak troponin levels (area under the curve), day 1 MSOFA score, ejection fraction, and in-hospital mortality for each treatment group. Prehospital data were used to determine transport time, distance, and crew configuration for each group. The two treatment groups were then compared using regression.
Results: A total of 212 STEMI patients were transferred in our regional health system during this period: 156 (73%) by air and 56 (27%) by ground. Patients were 39% female and averaged 63.5 (SD 13.5) years of age. The air and ground cohorts did not differ with respect to age, gender, or initial vital signs (SBP, HR, GCS). Air transports were longer (62 min [IQR 46, 71] vs. 37 min [IQR 33, 43] by ground), and mortality was higher (9.6% vs. 3.6%). When adjusted for time, use of vasopressors, modified organ failure assessment score, hypotension, and mode of transport, there was no difference in mortality or post-PCI ejection fraction.
Conclusions: STEMI patients transported by air in our regional health system had higher mortality and required longer transports than the ground cohort. Mortality and ejection fraction did not differ when adjusted for severity of illness.

ASSESSMENT OF PAIN MANAGEMENT DURING TRANSPORT OF INTUBATED PATIENTS IN A PREHOSPITAL SETTING
Ayesha Zia, Russell MacDonald, Sean Moore, James Ducharme, Christian Vaillancourt, University of Ottawa, Department of Emergency Medicine
Background: While methods have been developed to assess pain and provide analgesia to hospitalized intubated patients, little is known about current EMS practices in providing similar care during air and land medical transfers. We therefore sought to determine whether opioid analgesia is provided to intubated patients during transportation in the out-of-hospital setting.
Methods: We conducted a health record review examining electronic records of intubated patients transported by Ornge in 2015. Ornge is the exclusive provider of air and land transport of critically ill patients in Ontario, Canada with over 18,000 transports per year. We identified cases using Ornge’s database and selected intubated patients meeting inclusion criteria. A standardized data extraction form was piloted and used by a single trained data extractor. The primary outcome was frequency of administration and dose adequacy of an opioid analgesic. Secondary outcomes included: choice of analgesics used (Fentanyl, Hydromorphone or Morphine), adverse events, and impact of age, sex, or reason for transfer on pain management. We present descriptive statistics.
Results: Our strategy identified 500 potential cases, of which 448 met our inclusion criteria. Among those 448 patients, 154 (34.4%) were female, 328 (73.4%) received analgesia, and 211 (64.3% of those treated) received more than one dose during transport (median 2 doses, IQR 1-3). The average transport time was 2 h 28 min, and repeated dosing occurred primarily (45.5%) in transports of over 3 h. Fentanyl was the most commonly used analgesic (97.6%), and the most commonly used dose was 50 μg (51.8%). Adverse events occurred in 8 (2.5%) patients, with 5 patients having hypotension (MAP<65). There was no significant difference in administration of analgesia based on patient age or sex (68.8% of female and 75.3% of male patients received analgesia). Interestingly, 30.8% of patients repatriated to their home hospital received analgesia, compared with 72.3% of patients transferred for all other reasons.
Conclusions: More than 73% of intubated patients transported by Ornge received an opioid analgesic, primarily in the form of Fentanyl. We found no clinically relevant difference in the administration of analgesics based on age, sex or reason for transfer other than home repatriation.

60 SECONDS TO SURVIVAL: A DISASTER TRIAGE VIDEO GAME AND IMPROVED TRIAGE ACCURACY FOR PREHOSPITAL CARE PROVIDERS
Mark X. Cicero, Marc Auerbach, Yale School of Medicine
Background: Disaster triage training for emergency medical service (EMS) providers is not standardized. Simulation training is costly and time-consuming. In contrast, educational video games enable low-cost and less time-consuming standardized training. We hypothesized that players of the video game “60 Seconds to Survival” (60S) would have greater improvements in disaster triage accuracy compared to control subjects who were not exposed to 60S.
Methods: Participants recorded their demographics and highest level of EMS training and were randomized to play 60S (intervention) or serve as controls. At Time 1, all participants completed a live school-shooting simulation in which manikins and standardized patients depicted 10 adult and pediatric victims. The intervention group then played 60S over the following 13 weeks, until Time 2. Players triaged 12 patients across three scenarios (school shooting, house fire, tornado) and received automated, in-game performance feedback. At Time 2, the same live simulation was conducted for all participants. Controls had no formal disaster training during the study. The main outcome was improvement in triage accuracy in the live simulations from Time 1 to Time 2. Physicians and EMS providers determined the expected triage level (RED/YELLOW/GREEN/BLACK) via a modified Delphi method.
Results: There were 39 participants in the intervention group and 23 controls. There was no difference in gender, level of training, or years of EMS experience (median 4.0 years intervention, 4.5 years control, p=0.72) between the intervention and control groups. At Time 1, median triage accuracy was 80% [IQR 70-90%] for both groups. At Time 2 the intervention group had median triage accuracy of 90% [IQR 80-100%, p<0.001 compared to Time 1]. The control group also had median Time 2 accuracy of 90% [IQR 80-100%, p<0.001], with no difference compared to the intervention group (p=0.073).
Conclusions: Both groups demonstrated improved triage accuracy from Time 1 to Time 2. There was no significant difference between the intervention and control groups. These results may be due to small sample size, the Hawthorne effect or lack of impact of this iteration of 60S. Future directions include assessment of the game’s effect on triage accuracy with a larger, multiple site cohort and iterative development to improve 60S.

BARRIERS TO DISPATCH-ASSISTED CARDIOPULMONARY RESUSCITATION INSTRUCTION
Manali Divyesh Shah, Cherie Bartram, Kevin Irwin, Bryan McNally, Timothy Gallagher, Kimberly Vellano, Robert Swor, Oakland University William Beaumont School of Medicine
Background: Dispatch-assisted CPR instructions (DA-CPRi) have been shown to significantly increase rates of bystander CPR (BCPR) provision. Dispatch agencies are either primary Public Safety Answering Points (pPSAPs) or secondary PSAPs (sPSAPs). Our objectives were to describe barriers to DA-CPRi provision in a sample of dispatch agencies and to assess whether operational differences affect DA-CPRi provision.
Methods: We conducted a retrospective study of dispatch data from all OHCA calls to a convenience sample of US EMS dispatch centers from 1/1/14 to 12/31/15. Both pPSAPs and sPSAPs were included and were identified through a survey of participating agencies. OHCA dispatch recordings were reviewed by agency supervisors using a web tool (the Cardiac Arrest Registry to Enhance Survival (CARES) dispatch registry). Temporal elements, operational and logistical barriers to DA-CPRi, and text comments were recorded (multiple dispatcher comments per case). Cases were excluded if CPR was in progress, the caller was not with the patient, or the OHCA occurred in a healthcare facility or nursing home or post-EMS arrival. Descriptive data are reported.
Results: We identified 2,391 cases from 28 dispatch agencies in 9 states with data for review, serving a population of 10.3 million. Twenty agencies were pPSAPs (N=1,316 calls) and 8 were sPSAPs (N=1,077 calls). There were 912 calls with barriers to DA-CPRi. Barriers were more commonly cited for sPSAPs than pPSAPs (564, 55.4% vs. 348, 28.3%), the most common being failure to transfer the caller from a pPSAP to an sPSAP, in 263/1,077 (24.4%) calls. Other barriers, which did not differ by PSAP type, were inability of the caller to move the patient (272, 11.4%), caller too distraught (142, 5.9%), and caller left the phone (96, 4.0%). Dispatcher recognition of cardiac arrest was recorded in 1,678 cases; of these, 1,209 (72.4%) callers received DA-CPRi, and CPR was initiated after instruction in 881 (72.9%) of cases.
Conclusions: In this convenience sample of dispatch agencies, the provision of DA-CPRi resulted in a high rate of CPR provision. Inability to move the patient was a major reason for not performing CPR. Improving the transfer of calls from pPSAPs to sPSAPs for DA-CPRi may have a substantial impact on the provision of BCPR.

PREDICTORS OF RESUSCITATION SUCCESS PRIOR TO EMS ARRIVAL IN OUT-OF-HOSPITAL CARDIAC ARREST PATIENTS TREATED WITH A PUBLIC ACCESS AED
Annemarie Silver, Hidetada Fukushima, Jeffrey Gould, Kristopher Edgell, David Appleby, Taku Iawami, Margaret Mullins, Robyn McDannold, Bentley Bobrow, ZOLL Medical
Background: In some instances, out-of-hospital cardiac arrest (OHCA) patients receiving bystander CPR and/or defibrillation by a public access AED (PAD) are successfully resuscitated prior to EMS arrival. The purpose of this study was to describe the factors that predict restoration of spontaneous breathing prior to EMS arrival in OHCA patients treated with a PAD.
Methods: A retrospective analysis was performed on data collected through a web-based AED management system (PlusTrac™, En Pro Inc.) from US customers between 2008 and 2015. Following each AED use, customers completed an anonymous online survey to describe characteristics and estimate times of resuscitation efforts. Multivariate logistic regression was used to determine independent predictors of restored spontaneous breathing prior to EMS arrival.
Results: Of 576 reported AED uses, 217 patients (56 ± 15 years of age, 89% male) received at least one shock and were considered to be in confirmed OHCA. Of these, 153 (71%) suffered a witnessed arrest. Event locations were retail (19%), hotel/resort (11%), business/corporation (25%), government organization (7%), fitness center (35%), and nursing home (4%). Bystander CPR was performed by a medical professional or AED-trained individual for 63% of patients. Time to AED retrieval was 2 min (IQR 1, 5), and patients received 1 shock (IQR 1, 2). Spontaneous breathing was restored prior to EMS arrival in 80 patients (37%). Multivariate logistic regression revealed that witnessed arrest was a positive predictor of restored breathing [OR 4.80 (95% CI 2.00-11.5)], whereas time to retrieve an AED [0.87 (0.77-0.98)], total number of shocks [0.40 (0.23-0.69)], and arrest in a hotel [0.17 (0.03-0.92)] or nursing home (predicted failure perfectly) were negative predictors.
Conclusions: A significant proportion of OHCA patients who receive a shock from a PAD regain spontaneous breathing prior to EMS arrival. Time to AED retrieval, number of shocks delivered, witnessed arrest, and location of arrest are significant predictors of restored breathing.
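For readers who want to see the arithmetic behind odds ratios like those reported above, here is a minimal sketch of how an odds ratio and a Wald 95% CI are derived from a 2×2 table. The counts below are hypothetical placeholders chosen only to be consistent with the abstract's margins (153 witnessed arrests, 80 patients with restored breathing); the abstract does not report the raw table, and its ORs are adjusted estimates.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (witnessed arrest x restored breathing), not study data.
or_, lo, hi = odds_ratio_ci(70, 83, 10, 54)
```

An OR whose CI excludes 1 (as here) would be read as a significant association in this framework.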

TERMINATION OR WITHHOLDING OF RESUSCITATION IN TRAUMATIC CARDIAC ARREST IN STATEWIDE TREATMENT PROTOCOLS
Christie L. Fritz, Edward Ullman, David Schoenfeld, Beth Israel Deaconess Medical Center, Harvard Medical School
Background: The National Association of EMS Physicians (NAEMSP) issued a joint policy statement with the American College of Surgeons Committee on Trauma (ACS-COT) in March 2003 titled “Guidelines for Withholding or Termination of Resuscitation in Prehospital Traumatic Cardiopulmonary Arrest.” This statement strongly advocates for the termination of resuscitation (TOR) or withholding of resuscitation in traumatic cardiopulmonary arrest in the prehospital setting. Traumatic cardiopulmonary arrest has an extremely poor prognosis, particularly in blunt trauma. The majority of states issue statewide treatment protocols (STPs) that are either mandatory or serve as a guide for medical directors. The purpose of this investigation is to describe the extent to which STPs incorporate the policy statement on traumatic cardiac arrest.
Methods: A standardized review of all STPs was performed for termination of resuscitation protocols in traumatic cardiac arrest. The date of each protocol's most recent revision was also captured.
Results: Thirty of fifty states issue STPs. Of those thirty, thirteen have no termination of resuscitation protocol for traumatic cardiac arrest. Of the seventeen remaining states, four allow termination of resuscitation only in blunt traumatic arrest and eight allow termination in both penetrating and blunt traumatic arrest with no pulse or organized electrical activity. The five remaining states allow termination of resuscitation only in conjunction with online medical control.
Conclusions: Prehospital care is increasingly driven by evidence-based practices and reinforced by professional policy statements from organizations such as NAEMSP and ACS-COT. Despite this, there is still variation in implementation, which leads to inconsistent care and transport of unsalvageable, high-utilization traumatic arrest patients. STPs also vary considerably with regard to TOR in blunt versus penetrating traumatic cardiac arrest, as well as the duration and appropriateness of resuscitation. There is variability in the requirement for online medical control, as only 12 states allow termination or withholding of resuscitation as a standing order. Because every state included has revised its protocols since the policy statement was issued, revision cycles are unlikely to be a contributing factor. Further studies are needed to better understand attitudes toward futile resuscitation as well as barriers to termination of resuscitation.

FIRST 100 CASES OF BLS FIRST RESPONDER-ADMINISTERED NALOXONE IN A STATEWIDE EMS SYSTEM
Joshua Mastenbrook, James Markman, Tyler Koedam, William Fales, Western Michigan University School of Medicine
Background: In October 2015, a midwestern state law mandated that all BLS first responder (FR) agencies be trained and equipped to administer naloxone to suspected opioid-overdose patients. Although well intentioned, the mandate has raised several questions: does naloxone use displace emphasis on early positive pressure ventilation (PPV), is naloxone being appropriately administered, and does the mandate affect patient outcomes? The purpose of this study was to evaluate the first 100 cases of FR-administered naloxone (FR-naloxone) for administration appropriateness and adherence to the state protocol, which calls for intranasal naloxone after PPV initiation and only when ALS is delayed by >5 minutes.
Methods: A retrospective chart review was performed using the statewide EMS information system, filtering on “naloxone” as a medication administered. Beginning with the implementation date of 10/15/2015, the first 100 first-responder naloxone administrations were reviewed by a three-investigator panel. Data abstracted from each case included initial impression of mental and respiratory status, airway interventions performed, known history of opioid abuse, and timing of ALS arrival. Data were analyzed using standard descriptive statistics.
Results: Of the first 100 patients, 71% were male, and the median age was 35.5 (18-89) years. Seventy-five percent of naloxone administrations were provided to patients with pulses and inadequate respirations. PPVs were given prior to naloxone in 51.3% of patients with inadequate respirations. History of prior drug use was reported by the FR in 66% of cases. FRs initiated PPV in 39.4% of known drug abuse patients versus 61.7% in those without known abuse (p=0.0337). ALS reportedly arrived within 5 minutes of FR arrival in 38% of cases, >5 minutes in 17% of cases, and in 57% of cases there was no reference to ALS response time.
Conclusions: Adherence to the state naloxone protocol appears poor among FRs. Based on initial patient evaluation, the majority of patients given naloxone had an inadequate respiratory effort. Pre-naloxone PPV was frequently not reported in patients with decreased respirations. A history of known drug abuse was associated with less frequent PPV. ALS arrived within 5 minutes of FR-naloxone in at least one-third of patients.
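The comparison of PPV rates between patients with and without known drug abuse (39.4% vs. 61.7%, p=0.0337) can be illustrated with a pooled two-proportion z-test. The counts below (26/66 vs. 21/34) are assumptions derived from the reported percentages and cohort size, not figures stated in the abstract, and the authors' exact test may differ.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                           # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))   # standard normal CDF
    return z, 2 * (1 - phi)

# Hypothetical counts consistent with the reported percentages:
# PPV in 26/66 known-abuse patients vs. 21/34 without known abuse.
z, p = two_prop_z(26, 66, 21, 34)
```

With these counts the two-sided p-value lands near the abstract's reported 0.0337.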

PREHOSPITAL TRIAGE OF ACUTE ISCHEMIC STROKE PATIENTS TO AN INTRAVENOUS TPA-CAPABLE VERSUS ENDOVASCULAR-CAPABLE HOSPITAL
Justin Lee Benoit, Pooja Khatri, Opeolu M. Adeoye, Joseph P. Broderick, Jason T. McMullan, Jan F. Scheitz, Achala S. Vagal, Mark H. Eckman, University of Cincinnati
Background: American Stroke Association guidelines for prehospital acute ischemic stroke recommend against bypassing an intravenous tPA-capable hospital (ICH) if the additional transportation time to an endovascular-capable hospital (ECH) is greater than 15–20 minutes. However, it is unknown when the benefit of potential endovascular therapy at an ECH outweighs the harm from delaying thrombolytic therapy at a closer ICH, especially since large vessel occlusion (LVO) status is initially unknown. Our objective was to calculate the optimal prehospital triage strategy for acute ischemic stroke patients.
Methods: A decision analytic model was constructed using a comprehensive literature review and interventional trial data containing time-dependent modified Rankin Scale distributions. Base case was a 69-year-old patient triaged by Emergency Medical Services (EMS) 110 minutes after stroke onset. Base case triage choices were (1) transport to the closest ICH (12 minutes away), (2) transport to the ECH (60 minutes away) bypassing the ICH, or (3) apply the Cincinnati Stroke Triage Assessment Tool (C-STAT) and transport to the ECH if positive for LVO. Sensitivity analyses were conducted to determine the impact of onset to EMS triage interval, ICH and ECH transportation times, probability of LVO, and alternative prehospital triage tools. Outcomes were assessed using quality-adjusted life years (QALYs).
Results: In the base case, utilizing the C-STAT yielded the highest expected utility (8.48 QALYs). Sensitivity analyses demonstrated direct ECH transport was superior to ICH transport until the ECH was >56 minutes away (44 additional minutes due to ICH bypass). ICH transport was superior when the onset to EMS triage interval was long, while ECH transport was optimal with shorter intervals. As the probability of LVO increased, ECH transport was optimal at longer onset to EMS triage intervals. The optimal triage strategy was highly dependent on specific interactions between the onset to EMS triage interval, ICH transportation time, and ECH transportation time.
Conclusions: Acute ischemic stroke guidelines for ICH bypass time should be increased. No absolute time difference between ECH and ICH transportation is capable of optimizing EMS triage for all patients. Prehospital triage tools for LVO have utility in further optimizing triage.
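As a rough illustration of how a decision-analytic model compares triage strategies, the sketch below computes each strategy's expected QALYs as a probability-weighted average over LVO status, with a screening-tool strategy routed by test result. Every number here (LVO prevalence, QALY values, C-STAT sensitivity/specificity) is an invented placeholder, not an input from the study's model.

```python
# Hypothetical model inputs -- illustrative only, not the study's values.
P_LVO = 0.3                       # assumed probability of large vessel occlusion

# Expected QALYs by (destination, LVO status); invented placeholders.
qalys = {
    ("ICH", True): 7.9, ("ICH", False): 8.6,
    ("ECH", True): 8.7, ("ECH", False): 8.3,
}

sens, spec = 0.83, 0.40           # assumed screening-tool characteristics

def expected_utility(strategy):
    """Expected QALYs of always transporting to one destination."""
    return P_LVO * qalys[(strategy, True)] + (1 - P_LVO) * qalys[(strategy, False)]

def expected_utility_screen():
    """Screen-and-route: ECH if the LVO screen is positive, else ICH."""
    eu = P_LVO * (sens * qalys[("ECH", True)] + (1 - sens) * qalys[("ICH", True)])
    eu += (1 - P_LVO) * ((1 - spec) * qalys[("ECH", False)] + spec * qalys[("ICH", False)])
    return eu

eu_ich = expected_utility("ICH")
eu_ech = expected_utility("ECH")
eu_screen = expected_utility_screen()
```

With these placeholder inputs the screening strategy has the highest expected utility, qualitatively mirroring the base-case finding; a sensitivity analysis would sweep `P_LVO` and the transport-time-dependent QALY values.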

DIRECT VERSUS INDIRECT LARYNGOSCOPY FOR PREHOSPITAL INTUBATION: A SYSTEMATIC REVIEW AND META-ANALYSIS
Paul Brian Savino, Scott Reichelderfer, Mary P. Mercer, Karl A. Sporer, University of California, San Francisco
Background: The use of video and other forms of indirect laryngoscopy (IDL) for intubation has gained popularity, especially in acute care areas of the hospital. In the prehospital setting, utilization of IDL has also increased, but widespread implementation may be associated with significant cost and challenges. To our knowledge, no systematic review and meta-analysis has yet assessed the use of IDL compared to direct laryngoscopy (DL) in the prehospital setting. Our objective was to determine whether IDL improves overall and first-pass endotracheal intubation success rates in the prehospital setting compared to DL.
Methods: A systematic search was performed of the PubMed, Embase, and SCOPUS databases through May 2016 using search terms related to the setting and procedure. To meet the inclusion criteria, studies had to use living human subjects in the prehospital setting, be published in English, and include data that allowed for the calculation of overall and/or first-pass success/failure rates. Data were extracted separately by two reviewers, and a meta-analysis was then performed using a random effects model. Begg's test and funnel plots were used to assess for publication bias.
Results: The search returned 472 articles, of which 8 eligible studies were identified. When stratified by provider type, the pooled relative risk (RR) estimate for overall intubation failure using IDL was 20.77 (95% CI 5.46, 79.05) among studies using physicians and 0.44 (95% CI 0.19, 1.00) among studies using non-physicians. For first-pass intubation failure, the pooled RR estimate using IDL was 2.98 (95% CI 1.93, 4.62) among physicians and 0.56 (95% CI 0.36, 0.85) among non-physicians. Moderate to significant heterogeneity was noted between studies. There was no evidence of publication bias.
Conclusions: Among non-physician providers, IDL may provide benefit over DL in the prehospital setting. However, among physician providers with significant DL experience, introducing IDL may not improve overall or first-pass success rates and may be associated with decreased performance. Studies were heterogeneous, and further research is needed on the use of prehospital IDL.
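The pooled RRs above come from a random-effects meta-analysis; a minimal DerSimonian–Laird sketch is shown below. The per-study log relative risks and standard errors are hypothetical, not data from the eight included studies, and the authors' exact estimator may differ.

```python
import math

def pool_random_effects(log_rrs, ses):
    """DerSimonian-Laird random-effects pooled RR from per-study
    log relative risks and their standard errors."""
    w = [1 / s**2 for s in ses]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1 / (s**2 + tau2) for s in ses]          # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical study-level log-RRs and SEs, for illustration only.
rr, lo, hi = pool_random_effects([0.4, 1.3, 1.1], [0.25, 0.30, 0.35])
```

When between-study heterogeneity (tau²) is large, the random-effects weights flatten toward equality and the CI widens relative to a fixed-effect pool.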

USING DELAYED SEQUENCE INTUBATION TO PREVENT PERI-INTUBATION HYPOXIA
Jeffrey L. Jarvis, John Gonzales, Danny Johns, Christina Gutierrez, Williamson County EMS
Background: Rapid Sequence Intubation (RSI) is a commonly used EMS technique. Safely using this technique depends on adequate pre-oxygenation and de-nitrogenation. Delayed Sequence Intubation (DSI) introduces a delay between the administration of ketamine and a paralytic to allow appropriate pre-oxygenation in the presence of hypoxic agitation. Our EMS system identified peri-intubation hypoxia as an area for improvement and implemented a DSI strategy as part of our CQI efforts to address these occurrences.
Methods: We identified charts of non-cardiac arrest patients in our mid-sized suburban EMS system who were intubated between October 2013 and July 2016. We replaced our RSI process with a DSI protocol in January 2016. Using a before-and-after approach, we analyzed the differences between these groups in several metrics: lowest peri-intubation SpO2 and the percentages of moderate (80–89%) and severe (≤79%) hypoxic episodes. The groups were compared with a two-tailed t-test. Our DSI protocol prohibited intubation unless a pre-intubation SpO2 >93% was maintained for at least three minutes. To assist with this, each ambulance was equipped with a stopwatch and a mandatory checklist. The protocol required the use of high-flow nasal cannula oxygen, a BVM with PEEP, and ketamine for sedation.
Results: The RSI and DSI groups had 73 and 30 patients respectively. The RSI vs. DSI average lowest peri-intubation SpO2 was 82.3% vs. 97.9% (p = 0.0006), the percentage of cases with moderate hypoxia was 39.7% vs. 0.0% (p < 0.0001) and the percentage of cases with severe hypoxic episodes was 65.8% vs. 0.0% (p < 0.0001).
Conclusions: In this CQI process involving non-cardiac arrest intubations in this suburban system, we noted an increase in the lowest peri-intubation SpO2 and an elimination of hypoxic episodes. This was not a controlled or randomized trial; it was a before-and-after analysis of a CQI effort with groups of different sizes. Although the groups seem similar, it is possible that a difference in the DSI group biased the results against RSI. A randomized, controlled trial is needed to confirm the effect seen in this case series.
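The before-and-after SpO2 comparison can be illustrated with a t-test on group summaries; the sketch below uses Welch's unequal-variance form, which may differ from the authors' exact test. The means (82.3% vs. 97.9%) and group sizes (73 vs. 30) are taken from the abstract, but the SDs are assumptions, so the resulting statistic is illustrative rather than the study's.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and Satterthwaite degrees of freedom
    from group summaries (mean, SD, n)."""
    v1, v2 = s1**2 / n1, s2**2 / n2          # per-group variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Means and n from the abstract; SDs (14.0 and 2.0) are assumed placeholders.
t, df = welch_t(82.3, 14.0, 73, 97.9, 2.0, 30)
```

A |t| this large at ~79 degrees of freedom corresponds to a vanishingly small p-value, consistent with the reported significance.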

ACTIVE INTRATHORACIC PRESSURE REGULATION IMPROVES MEAN ARTERIAL PRESSURE DURING CARDIOPULMONARY RESUSCITATION
Nathan Burkhart, William Fales, Charles Lick, Kevin Franklin, Anja Metzger, Keith Lurie, Nicolas Segal, ZOLL Medical
Background: Active intrathoracic pressure regulation (a-IPR) is a novel, noninvasive therapy that actively generates a continuous negative intrathoracic pressure of −9 mmHg between positive pressure ventilations. a-IPR has been shown to increase circulation and survival in animal models of resuscitation, brain injury, and cardiac arrest. We compared the data from two sites utilizing a-IPR during CPR and evaluated mean arterial pressure (MAP), end-tidal carbon dioxide (EtCO2), and rate of return of spontaneous circulation (ROSC) versus patients not receiving a-IPR at one of the sites.
Methods: All patients were treated initially with manual CPR. When a-IPR-trained medics arrived on scene, the a-IPR device (CirQLATOR, ZOLL Medical, Minneapolis, MN) was attached to the patient's advanced airway for the intervention group and CPR was continued. Control patients received manual CPR without the addition of a-IPR. MAP and EtCO2 were measured by the research team without interfering with resuscitation efforts. Values are expressed as mean ± SD and were compared using unpaired t-tests; the difference in ROSC rate was compared using Fisher's exact test. P-values <0.05 were considered statistically significant.
Results: Data were analyzed for 13 patients who received a-IPR and 8 control patients who received CPR without a-IPR. During CPR with a-IPR therapy, average MAP was 63.6 ± 30.7 mmHg compared to 40.0 ± 13.6 mmHg in the control group (p=0.03). EtCO2 also trended higher with a-IPR, with an average of 47 ± 30 mmHg versus 23 ± 16 mmHg in the control group (p=0.06). Of the patients who received a-IPR, 5/13 (38.5%) ultimately had ROSC, while only 1/8 (12.5%) of those who did not receive a-IPR achieved ROSC.
Conclusions: MAP was significantly higher in patients receiving a-IPR therapy. EtCO2 also trended higher, and the rate of ROSC was three-fold higher with a-IPR. These findings suggest that circulation and the potential for ROSC during CPR may be improved by the addition of a-IPR, which generates a negative end-expiratory pressure between each breath.
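The ROSC comparison above (5/13 vs. 1/8) is the kind of small-sample 2×2 table where Fisher's exact test applies; a self-contained sketch over the abstract's own counts follows. The two-sided definition used here (summing all tables no more probable than the observed one) is one common convention and may differ slightly from the authors' software.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]],
    summing all tables (at fixed margins) no more probable than observed."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # hypergeometric probability of x successes in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo_x, hi_x = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo_x, hi_x + 1)
               if p_table(x) <= p_obs + 1e-12)

# ROSC table from the abstract: a-IPR 5/13 vs. control 1/8.
p = fisher_exact_two_sided(5, 8, 1, 7)
```

The resulting p-value is well above 0.05, which is consistent with the abstract reporting the ROSC difference descriptively rather than as significant.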

A DESCRIPTIVE ANALYSIS OF MEDICAL CARE PROVIDED BY US LAW ENFORCEMENT PERSONNEL PRIOR TO EMS ARRIVAL
S. Brent Core, Christine M. Lohse, Matthew D. Sztajnkrycer, Department of Emergency Medicine, Mayo Clinic
Background: Law enforcement is increasingly viewed as a key component in the out-of-hospital chain of survival, with expanded roles in cardiac arrest, narcotic overdose, and traumatic bleeding. Little is known about the nature of care provided by law enforcement prior to the arrival of emergency medical services (EMS) assets. The purpose of the current study was to perform a descriptive analysis of events reported to a national EMS database.
Methods: Descriptive analysis of the 2014 National Emergency Medical Services Information System (NEMSIS) public release research data set, containing EMS emergency response data from 41 states. Code E09_02 1200 specifically identifies care provided by law enforcement prior to EMS arrival.
Results: A total of 25,835,729 unique events were reported. Of events in which pre-arrival care was documented, 2.0% received prior aid by law enforcement. Patients receiving law enforcement care prior to EMS arrival were more likely to be younger (52.8 ± 23.3 vs. 58.7 ± 23.3 years, p < 0.01), male (54.8% vs. 46.7%, p < 0.01), and white (80.3% vs. 77.5%, p < 0.01). BLS EMS response was twice as likely in patients receiving prior aid by law enforcement; no difference in ALS response was noted. Multiple-patient incidents were 5 times more likely with prior aid by law enforcement. Compared with prior aid by other services, law enforcement pre-arrival care was more likely with motor vehicle accidents (1.4x), firearm assaults (2.7x), knife assaults (2.4x), blunt assaults (1.6x), and drug overdoses (3.3x), and less likely at falls (0.55x). Cardiac arrest was significantly more common in patients receiving prior aid by law enforcement (16.5% vs. 2.6%). Naloxone was administered in 1,685 cases. Hemorrhage control, including tourniquet use and hemostatic agents, was more common in the law enforcement prior-aid group.
Conclusions: Where pre-arrival care was documented, law enforcement provided care in 2% of EMS patient encounters. The majority of cases involved cardiac arrest, motor vehicle accidents, and assaults. A better understanding of the nature of law enforcement care is required in order to identify potential barriers to care and to develop appropriate training and policy recommendations.

CHANGES IN CHEST COMPLIANCE OVER TIME AS MEASURED THROUGH CHANGES IN ANTERIOR POSTERIOR CHEST HEIGHT DURING CARDIOPULMONARY RESUSCITATION IN HUMAN CADAVERS
Nicolas Segal, Aaron E. Robinson, Paul S. Berger, Michael C. Lick, Andrew A. Ashton, Angela M. McArthur, Anja Metzger, Department of Emergency Medicine, University of Minnesota Medical School
Background: Chest compliance plays a fundamental role in the generation of circulation during conventional (C) CPR and active compression decompression (ACD) CPR. To study potential changes in chest compliance over time, anterior posterior (AP) chest height measurements were performed on fresh (never frozen) human cadavers during ACD-CPR before and after 5 minutes of automated C-CPR. We tested the hypothesis that after 5 minutes of C-CPR chest compliance would be significantly increased. We also wanted to measure, for the first time, the resulting AP changes with various levels of decompression force (lift).
Methods: Static compression forces (30, 40, and 50 kg) and decompression forces (−10 and −15 kg) were applied with a manual ACD-CPR device (ResQPUMP, ZOLL, Chelmsford, MA) on 9 cadavers. Lateral chest x-rays were obtained with multiple reference markers to assess changes in AP distance before and after 5 minutes of automated C-CPR.
Results: In 6 male and 3 female cadavers (75±17 years) with a height of 174±11 cm and a weight of 66±16 kg, the initial mean AP distance measured 22.5±2.1 cm. Changes (mean ± SD) in the AP distance (cm) during the applied forces were 2.1±1.2 for a compression force of 30 kg, 2.9±1.3 for 40 kg, 4.3±1.0 for 50 kg, 1.0±0.8 for a decompression force of −10 kg, and 1.8±0.6 for −15 kg. After 5 minutes of automated C-CPR, the AP distance measured 22.3±1.6 cm and AP excursion distances were significantly greater (p<0.05): the AP distance (cm) changed 3.7±1.4 for a compression force of 30 kg, 4.9±1.6 for 40 kg, 6.3±1.9 for 50 kg, 2.3±0.9 for −10 kg of lift, and 2.7±1.1 for −15 kg of lift.
Conclusions: These data demonstrate that chest compliance increases significantly over time, as shown by the significant increase in the measured AP excursion after 5 minutes of C-CPR. In addition, at least 10 kg of lift force (as provided by the suction cup during active decompression) is initially required to increase the AP distance by 1 cm, and greater decompression forces result in additional thoracic lift. These findings suggest that adjustments in compression and decompression forces may be needed to optimize CPR over time.

ASSOCIATION OF EMERGENCY MEDICAL SERVICE ACTIVATION TO EMERGENT CORONARY ANGIOGRAPHY TIME WITH SURVIVAL AND NEUROLOGICAL OUTCOME AFTER OUT-OF-HOSPITAL CARDIAC ARREST
Joo Jeong, Sang Do Shin, Young Sun Ro, Seoul National University Hospital
Background: It is uncertain whether earlier emergency coronary angiography (CAG) confers clinical benefit in patients with out-of-hospital cardiac arrest (OHCA). The purpose of this study was to evaluate the effect of the interval from emergency medical service (EMS) activation to emergency CAG on outcomes of OHCA.
Methods: A population-based observational study was conducted on adult, witnessed OHCAs of cardiac etiology in South Korea whose victims survived to admission between 2013 and 2014. All enrolled patients received emergent CAG on the same day as the OHCA. Multivariable logistic regression analysis was performed to assess the associations between EMS call-to-needle time (C2N time) and outcomes (survival to discharge and favorable neurological outcome). Confounders were adjusted for when calculating odds ratios (OR) and 95% confidence intervals (CI).
Results: A total of 418 OHCAs were analyzed. Group 1 (C2N time <90 minutes) comprised 91 patients, group 2 (90–120 minutes) 139 patients, and group 3 (>120 minutes) 188 patients. Survival to discharge was 82.4%, 75.5%, and 63.8%, respectively (p < 0.01). Good neurological outcome rates were 67.0%, 61.9%, and 47.3%, respectively (p < 0.01). Compared with group 3, adjusted ORs for survival were 2.59 (95% CI 1.27–5.31) for group 1 and 1.71 (95% CI 0.97–3.02) for group 2. Adjusted ORs for good neurological outcome compared with group 3 were 2.16 (95% CI 1.12–4.17) for group 1 and 1.93 (95% CI 1.12–3.33) for group 2.
Conclusions: This study provides some evidence that earlier emergent CAG after witnessed OHCA may improve survival and neurological outcome.

A COMPARISON OF DISTANCE FROM THE HOSPITAL AND OUTCOMES IN PATIENTS WITH SEVERE SEPSIS/SEPTIC SHOCK
Jessica Gershen, Desmond Fitzpatrick, Jason Jones, Matthew Tice, Christine Van Dillen, University of Florida Health; Shands, EMS fellowship
Background: Sepsis, a common ED presentation, is a leading cause of death and disability. Studies show that early goal-directed therapy (EGDT), including timely administration of fluids and antibiotics, reduces sepsis morbidity and mortality. Most sepsis patients arrive at the hospital via EMS from varying distances. The purpose of this study was to determine whether increased EMS transport times and greater distance traveled are associated with worse outcomes in sepsis. We hypothesized that patients from more distant areas experience worse outcomes, including increased hospital length of stay, end-organ damage, and likelihood of mortality.
Methods: Data included 2,336 patients admitted to the hospital with a diagnosis of severe sepsis or septic shock between 1/1/2009 through 9/1/2015. Only patients who arrived via ambulance or air transport were included. Overall mortality, length of hospital stay, days on dialysis and peri-dialysis, those requiring continuous renal replacement therapy (CRRT), and days on a ventilator were compared to the distance traveled to the hospital.
Results: Of the study patients, 201 (8.59%) required hemodialysis, 12 (0.51%) required peri-dialysis, 116 (4.96%) required CRRT, and 1,097 (46.90%) required mechanical ventilation. Mean distance to the hospital was 43.7 miles, N=1240 30 miles. Increasing distance demonstrated statistically significant associations with inpatient length of stay (p < 0.0001, Spearman's correlation r=0.092), ventilator days (p < 0.0001, r=0.138), and use of CRRT (p = 0.0020, r=0.064). The relationship between distance traveled and hemodialysis (r=0.0014) or peri-dialysis days (r=0.014) was not significant. Patients more than 30 miles from the hospital more often required mechanical ventilation (54% vs. 43%, p < 0.0001, OR 1.57, CI 1.32–1.85). Similarly, these patients had a greater likelihood of requiring CRRT (7% vs. 4%, p = 0.0056, OR 1.69, CI 1.16–2.45).
Conclusions: The results suggest that patients with longer EMS transport times suffer more complications from sepsis. We acknowledge that other factors, such as access to health care, affect patient outcomes. More research is needed in this area, including whether EGDT started by EMS in the prehospital setting could produce better outcomes.
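The distance-vs-outcome associations above use Spearman's rank correlation, which is simply the Pearson correlation of the rank vectors (with average ranks for ties). The sketch below illustrates the computation on invented miles-traveled vs. length-of-stay pairs, not study data.

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors,
    assigning average ranks to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend j over the run of values tied with v[order[i]]
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average 1-based rank for the run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical (miles traveled, hospital days) pairs -- illustration only.
rho = spearman_rho([5, 12, 30, 44, 80, 95], [3, 2, 4, 6, 5, 9])
```

Because only ranks enter the computation, rho is insensitive to the skewed distance distributions typical of transport data, which is why it suits this analysis.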

A NEW ERA IN SPINAL IMMOBILIZATION?
Brian Hehl, Beth Langley, Erin Wirths, Matt Wells, Tripp Winslow, Cape Fear Valley Health Systems
Background: It has long been the standard of care for trauma patients to receive spinal immobilization via a long spine board with a cervical collar, head blocks, and some variation of straps securing the patient to the board. Many prehospital agencies have taken steps to reduce or eliminate use of the long spine board because it provides no additional benefit to patients who lack symptoms of neurological deficit. A new spinal immobilization protocol was developed to restrict use of long spine boards to cases in which the patient demonstrated symptoms of a spinal injury without a change in mental status. All other trauma patients received only a cervical collar. We hypothesized that implementation of the new spinal immobilization protocol would have no impact on patient outcomes and would result in a significant decrease in the utilization of long backboards.
Methods: Data were collected through electronic databases at the agency and at the receiving hospital. Trauma services provided information on every patient who had diagnosed spinal fractures, spinal cord damage, or neurological deficits. These patients were then cross-referenced with a list of all patients in the same period who received treatment with long spine boards.
Results: A total of 122,326 EMS calls were logged in the two-year study period. Baseline data from the first year yielded 2,182 trauma patients placed on long spine boards under the standard protocol. The new protocol was implemented in the second year, during which 981 patients met criteria for long spine board placement. Adjusted for call volume, this represented a 62.5% reduction in long board use. Patients who had spinal injury without deficit and did not receive a long spine board showed no change in injury pattern, and no changes in patient outcomes were observed during the study period.
Conclusions: Use of the new immobilization protocol supported the hypothesis that there would be no impact on patient outcomes while significantly decreasing the use of long backboards.

A DESCRIPTIVE ANALYSIS OF PREHOSPITAL MIDAZOLAM AS A CHEMICAL RESTRAINT IN COMBATIVE PATIENTS
Lauren Leggatt, Matthew Davis, Kristine Van Aarsen, Paul Bradford, Pete Morassutti, Mason Leschysna, Division of Emergency Medicine, London Health Sciences Centre
Background: Paramedics are often required to manage violent or combative patients. In order to do so safely, chemical sedation may be required. There are a number of pharmacologic agents which may be used. However, there is a paucity of evidence as to the optimal agent. Our objective was to provide a descriptive analysis of a single base hospital’s experience with combative patients and to determine the efficacy and any adverse events associated with midazolam use in these patients.
Methods: A retrospective chart review of calls from 2 urban centers, from January 2012 to December 2015 was completed. All cases of combative patients were examined. Patients were excluded if they were 17 or younger.
Results: Of approximately 350,000 calls over the study period, 269 patients were combative. Of these, 186 (69.1%) received midazolam for sedation. Multiple doses were required in 33.3% of patients. Depending on route of administration, the average total dose was 6.27 mg (SD 3.98 mg) intramuscularly, 10.7 mg (SD 4.00 mg) intranasally, and 4.95 mg (SD 3.81 mg) intravenously. Midazolam was documented as effective in treating the combativeness in 133 (71.6%) calls, ineffective in 28 (15.1%), and not documented in 25 (13.4%). Adverse events after midazolam administration, defined as hypotension, bradypnea, bradycardia, or need for airway intervention, were encountered in 3 (1.61%) calls (a respiratory rate of 8, hypotension of 88/59 that responded to intravenous fluid, and asymptomatic bradycardia of 59). Of the 186 calls, 71 (38.2%) had documented drug ingestion and 117 (62.9%) required police presence. The number of combative patients trended upward each year over the study period, with a significant difference in the proportion of combative calls requiring midazolam administration between 2012 and 2015 (50.0% vs. 72.8%, p=0.007).
Conclusions: Prehospital use of midazolam for combative patients appears to be safe, with minimal adverse events. However, midazolam was ineffective in 15.1% and a third of all patients required multiple doses, prolonging the combative period and compromising paramedic safety. Further research is underway examining this cohort’s emergency department (ED) sedation needs and any associated adverse events within 1 hour of ED arrival.

PEDIATRIC PREHOSPITAL MEDICATION DOSING ERRORS: A NATIONAL SURVEY OF PARAMEDICS
John D. Hoyle et al., Western Michigan University Homer Stryker M.D. School of Medicine
Background: Pediatric drug dosing errors occur at a high rate in the prehospital environment.
Objective: To describe paramedic training and practice regarding pediatric drug administration, exposure to pediatric drug dose errors and safety culture among paramedics and EMS agencies in a national sample.
Methods: An electronic questionnaire was sent to a random sample of 10,530 nationally certified paramedics. Descriptive statistics were calculated.
Results: There were 1,043 (9.9%) responses, and 1,014 paramedics met inclusion criteria. Nearly half (43.0%) were familiar with a case in which EMS personnel delivered an incorrect pediatric drug dose. Over half (58.5%) believed their initial paramedic program did not include enough pediatric training. Two-thirds (66.0%) had administered a pediatric drug dose within the past year. When estimating the weight of a pediatric patient, 54.2% used a length-based tape, 35.8% asked the parent or guardian, and 2.5% relied on a smartphone application. Only 19.8% said their agency had an anonymous error-reporting system, and 50.7% believed they could report an error without fear of disciplinary action. As for solutions, 89.0% believed an EMS-specific Broselow-Luten Tape would be helpful, followed by drug dosing cards in milliliters (83.0%) and making the content of standardized pediatric courses more relevant (77.7%).
Conclusions: This national survey demonstrated that a significant number of paramedics are aware of a pediatric dosing error, that safety systems specific to pediatric patients are lacking, and that paramedics view pediatric drug cards and the elimination of drug calculations as helpful. Pediatric drug-dosing safety in the prehospital environment can be improved.

REDUCTION OF BYSTANDER TIME-TO-CHEST COMPRESSIONS USING A DISPATCHER-GUIDED CPR ALGORITHM
Marc Richard Conterato, Jan Althoff, Alexander Trembley, John Lyng, North Memorial Ambulance Service
Background: Earlier initiation of bystander compression-only CPR has a significant effect on the outcome of out-of-hospital cardiac arrest (OHCA). Dispatcher-assisted CPR is known to increase rates of bystander CPR. This study evaluates the effect of a novel dispatch-guided bystander CPR algorithm on the time between 911 call receipt and initiation of bystander compression-only CPR.
Methods: We conducted a retrospective review of all cardiac arrests that received prearrival instructions from dispatchers in our secondary public safety answering point following implementation of our algorithm. Each case was analyzed for the time between call receipt and initiation of chest compressions by bystanders. Outcome data were extracted from our CARES registry. The primary outcome was the time between call receipt and initiation of bystander chest compressions; the secondary outcome was patient survival.
Results: A total of 85 cardiac arrests were identified in our review from 5/1/2014 to 5/1/2016. Our algorithm underwent serial revision during the study period, and the versions covered the following numbers of cases: V1.0 (14), V1.1 (22), V1.2 (30), V1.3 (5), V1.4 (7) and V1.5 (7). The average patient age was 58.6 years, and 65.9% were male, with no significant differences in patient age or gender between the cohorts. Seconds from call receipt to bystander compression were as follows: V1.0 (137), V1.1 (181), V1.2 (173), V1.3 (177), V1.4 (203), and V1.5 (151). Our algorithm shortened the time to bystander compressions by 59 seconds compared to our pre-algorithm baseline of 210 seconds. Our pre-algorithm ROSC rates were 0.7% before and 31.8% after EMS arrival; at the end of the study period, ROSC rates were 1.2% and 27.8%, respectively. Survival from cardiac arrest before algorithm implementation was 8.0% (n=31) compared to 8.9% (n=55) at the end of the study period.
Conclusions: Implementation and revision of a dispatcher-guided algorithm for bystander compression-only CPR achieved a shorter interval from call receipt to time of first compression, but initially appears to have had only a small impact on survival from OHCA.

FREQUENCY AND ETIOLOGY OF PEDIATRIC OUTPATIENT EMERGENCIES
Matthew Yuknis, Brian Benneyworth, Indiana University School of Medicine
Background: Office-based emergencies pose a significant challenge for pediatric providers. More than 50% of pediatric offices report seeing between one and five patients a week requiring emergent treatment. We hypothesize that outpatient emergencies in pediatric patients occur at a different frequency, and arise from different etiologies, than currently described in the medical literature.
Methods: We retrospectively extracted data from EMS charts for the city of Indianapolis for all patients less than 18 years of age cared for from 1/1/2012 to 12/31/2014. Probabilistic matching of pick-up locations against addresses of outpatient medical facilities, taken from the Indiana Physician Workforce study, was performed to isolate patients requiring emergency care in an outpatient setting. We reviewed these patients’ EMS records to categorize the types of illnesses and the interventions provided en route to definitive care. Basic demographic data pertaining to the patients and practices were also collected. Finally, accuracy of EMS reporting with regard to pick-up location was determined.
Results: A total of 38,841 pediatric patients were transported by Indianapolis EMS. Of these, we identified 1,132 who, based on matching criteria, were picked up at a medical facility. 805 of these patients were transported between hospitals or from an unidentified location, leaving 327 patients meeting criteria for analysis. Average patient age was 5.6 years. 58.1% of patients were categorized as respiratory distress; the other most common illness categories were behavioral issues including suicidal ideation (6.4%), seizures (6.1%), and syncope (5.5%). 27.2% of patients required supplemental oxygen during transport, 26.9% received albuterol, and 9.2% required IV access. Of the predetermined critical care interventions, 1.5% of patients required benzodiazepine administration, 1.5% required a fluid bolus, and 1.2% required epinephrine (IM or racemic). No patients required bag-mask ventilation, an artificial airway, intraosseous access, or CPR.
Conclusions: The majority of patients requiring emergent transport from an outpatient facility presented with respiratory distress. Patients rarely required any critical care interventions. This study will better categorize pediatric outpatient emergencies and help prepare EMS for dealing with these patients. Additional analysis will help identify the frequency of these emergencies and differences in patient and practice characteristics.

KETAMINE FOR ACUTE PAIN: ONE EMS SYSTEM’S FIVE-YEAR EXPERIENCE
Robert L. Dickson, Guy R. Gleisberg, Jordan L. Anderson, Wesley R. Meyer, Jared L. Cosper, Baylor College of Medicine; EMS Collaborative Research Group
Background: Treating patients’ pain in the prehospital setting ought to be a medical priority for Emergency Medical Services (EMS). First responders must quickly identify, treat and manage pain with adequate analgesics to lessen suffering and reduce the potential for harmful short- and long-term effects. At present, literature on multi-year use of ketamine for analgesia in the out-of-hospital (OOH) setting is limited. Our objective was to determine the change in patients’ medical response after subdissociative-dose ketamine administration by paramedics treating under a pain management protocol.
Methods: A retrospective analysis was conducted of all patients receiving prehospital ketamine from paramedics for pain in a countywide EMS system from August 2010 to March 2016. The indication for ketamine administration was analgesia for trauma or medical patients requiring pain management, with dosing of 0.1 - 0.5 mg/kg intramuscular (IM), intravenous (IV) or intraosseous (IO) over 60 seconds. Contraindications included hypertensive crisis, seizures, elevated intracranial pressure, inability to control the airway, and psychotic illness. Cases where ketamine was used for pain management were queried within the ZOLL ePCR system and abstracted. National EMS Information System response to medication, route, dosing, and general demographics were recorded. Descriptive statistics were used to describe study characteristics, and a 95% confidence interval was calculated for patients’ medical response.
Results: A total of 115 patients met the study inclusion criteria. Among this group, a clinical improvement in medical response was observed after the first dose in 86% (95% CI 79% - 91%) of patients, with 14% unchanged and no cases of worsening response. The mean first ketamine dose was 74 mg (range 4 - 475 mg); administration routes were 99 (86%) IV, 25 (22%) IM, and 2 (2%) IO. Mean age was 49 years (range 12 - 92 years), 67 (58%) patients were male, and mean weight was 87 kg. No patient adverse events were reported.
Conclusions: In this system’s population, ketamine analgesia used by EMS was effective and safe for pain management in the prehospital care setting. Prospective OOH patient studies utilizing ketamine are needed to identify and define a comprehensive pain treatment continuum.
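The reported interval for first-dose response is consistent with a Wilson score interval on 99/115 responders (99 back-calculated from the reported 86% of 115 patients); the abstract does not state which interval method the authors used, so this Python sketch is an editorial assumption:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# 99 of 115 patients improved after the first ketamine dose (back-calculated)
lo, hi = wilson_ci(99, 115)
print(f"{99/115:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # 86% (95% CI 79%-91%)
```

A normal-approximation (Wald) interval gives similar bounds at this sample size; the Wilson form is generally preferred when a proportion is near 0 or 1.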

TELECOMMUNICATOR BREATHING ASSESSMENT TECHNIQUES IN OUT-OF-HOSPITAL CARDIAC ARREST
Blake Langlais, John Sutter, Katarina Bohm, Micah Panczyk, Chengcheng Hu, Daniel W. Spaite, Bentley J. Bobrow, Bureau of EMS and Trauma System, Arizona Department of Health Services
Background: Telephone CPR (TCPR) increases survival from out-of-hospital cardiac arrest (OHCA). The AHA recommends a two-question approach for telecommunicators to identify OHCA: Is the patient (1) conscious and (2) breathing normally? Telecommunicators use breathing assessment techniques (BATs) to identify or confirm breathing status. BATs may delay the TCPR process. We sought to determine the durations of BATs and the frequency of unneeded BATs.
Methods: Retrospective, observational study of telecommunicator breathing inquiries using defined BATs. BATs were categorized as either initial (IA) or secondary (SA) assessments in OHCA audio recordings received at a regional 911 center in Arizona between 2/1/2011 and 3/15/2012. The center had not adopted the two-question approach. SAs were defined as any assessment after the IA. Unneeded BATs were defined as any occurring after the call evaluator recognized the patient was not breathing normally. BAT type and duration were collected in a structured format. Durations of unique BATs within cases were estimated using the Kaplan-Meier method.
Results: After exclusions, 591 recordings were evaluated. Telecommunicators made IAs in 443 (75.0%). Of these cases, 239 (54.0%) had at least one SA. The following are median elapsed times in seconds for six BATs telecommunicators asked or instructed callers to perform: (1) is patient breathing? = 5 [n=399, 92 censored, 95% CI=(4,6)], (2) watch for rise and fall of chest = 10 [n=102, 18 censored, 95% CI=(8, 11)], (3) listen and feel for signs of breathing = 15 [n=61, 9 censored, 95% CI=(12, 20)], (4) head-tilt-chin-lift = 21 [n=57, 7 censored, 95% CI=(17, 25)], (5) count patient’s breathing rate = 21 [n=53, 10 censored, 95% CI=(14,26)], (6) is there agonal breathing? = 7 [n=19, 3 censored, 95% CI=(4, 9)]. Among all 358 SAs employed by telecommunicators, 72.6% (260) were employed after the call evaluator recognized the patient was not breathing normally. BATs ranged from a median of 5 s to 21 s in duration.
Conclusions: During OHCA, breathing assessments can cause significant time delays in recognition of cardiac arrest and initiation of chest compressions.
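The median BAT durations above were estimated with the Kaplan-Meier method, which accounts for censored assessments (e.g., a call that moved on before the assessment finished). A minimal product-limit sketch on illustrative data, not the study's:

```python
def km_median(times, observed):
    """Kaplan-Meier median: the smallest time at which the estimated
    survival (here, 'assessment still ongoing') function drops to <= 0.5.

    times    -- duration of each assessment, in seconds
    observed -- True if fully observed, False if censored
    """
    data = sorted(zip(times, observed))
    at_risk = len(data)
    survival = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]   # all entries at time t
        events = sum(tied)
        if events:
            survival *= 1 - events / at_risk      # product-limit step
            if survival <= 0.5:
                return t
        at_risk -= len(tied)
        i += len(tied)
    return None  # median not reached

# Illustrative durations; the 15 s assessment is censored
print(km_median([5, 10, 10, 15, 20], [True, True, True, False, True]))  # 10
```

With no censoring this reduces to the ordinary sample median; the censored entries reported in the abstract (e.g., "92 censored" of 399) are what make the Kaplan-Meier estimator necessary.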

ADEQUACY OF EMS SYSTEMS OF CARE PROTOCOLS FOR ADULTS WITH OHCA, STEMI & STROKE IN RURAL AND NON-RURAL COUNTIES IN OREGON: A STRUCTURED REVIEW
Paul S. Rostykus, Oregon Health and Science University
Background: EMS protocols for prehospital care vary greatly in the US. For Out-of-Hospital Cardiac Arrest (OHCA), STEMI and stroke systems of care, well-developed evidence-based consensus statements and guidelines exist regarding prehospital care. Our objective was to examine rural and non-rural Oregon licensed ambulance protocols for OHCA, STEMI & stroke elements of care.
Methods: NASEMSO evidence-based guideline and AHA consensus statement recommended care elements were selected for abstraction. For OHCA there were 27 elements abstracted, 20 for STEMI, and 10 for stroke. Oregon licensed ambulance treatment protocols received for OHCA, STEMI and stroke were reviewed in a structured fashion using a piloted data collection tool. Rural EMS agencies were defined as those in counties classified by the State of Oregon as Rural or Frontier. Descriptive statistics and chi-square were used to summarize the findings.
Results: Protocols were received from 95 Oregon ambulance agencies in 34 of Oregon’s 36 counties. There were 31 different protocols used in the 60 rural agencies and 9 different protocols used in the 35 non-rural agencies. At least 75% of the protocols were dated within the prior 4 years, more so among non-rural than rural agencies. Recommended care elements were mentioned or followed variably, ranging from 0-100% of the time. Elements present in all OHCA protocols included initial vasopressor and advanced airway; in all STEMI protocols, 12-lead ECG, IV access, nitroglycerin and analgesic administration; and in all stroke protocols, time of symptom onset. The elements most commonly lacking from the protocols were specific event times and tidal volumes for ventilations. EMS protocol data element compliance or presence was higher in non-rural agencies than in rural agencies for OHCA and stroke (p < 0.05), but not different for STEMI (p = 0.053).
Conclusions: In Oregon, ambulance protocols for OHCA, STEMI and stroke systems of care were quite varied. Protocols from rural agencies tended to lack elements compared to those from non-rural agencies. Further studies may be of benefit to determine the optimal EMS treatment protocols and how to best implement them throughout the state, to cover both rural and non-rural EMS agencies.

USE OF THE LUCAS-2 DEVICE IN HELICOPTER EMS: IS IT SAFE?
Ralph J. Frascone, William Heegaard, Cheryl Pasquarella, Joseph R. Pasquarella, Adam W. Mayer, Sandi S. Wewerka, Regions Hospital
Background: Performing CPR in EMS helicopters is difficult. Manual CPR presents a safety hazard because of the necessity to remove safety belts, and lack of a sterile cockpit (everyone and everything secured) during takeoff and landing violates FAA regulations (FAR 135.100). The purpose of this trial was to evaluate survival before and following deployment of the LUCAS device in a helicopter EMS agency, to assess its acceptability as a substitute for manual compressions.
Methods: Patient care records from June 2009 – September 2013 (pre-LUCAS) and October 2012 – January 2016 (post-LUCAS) were identified. Demographics (age, sex), initial cardiac rhythm, patient outcome (survival to ED admission) and duration of in-flight CPR were abstracted. Basic comparisons of continuous variables were completed using unadjusted t-tests and unadjusted Chi-squared for categorical variables. Stratified ANOVAs were calculated to analyze the group differences in CPR time based on patient survival status prior to HEMS leaving the emergency department.
Results: 109 runs (43 pre-LUCAS and 65 post-LUCAS) were abstracted. Gender (pre, 32% male vs. post, 68% male, p = 0.015) and initial rhythm (pre, 24% asystole vs. post, 76% asystole, p = 0.009) were significantly different between the study periods. There were no overall differences in age or survival (pre = 77% vs. post = 67%, p=0.34) between the study periods. CPR was performed for a significantly longer period of time with the LUCAS compared to standard CPR (pre mean, 8.08 min (SD 7.58) vs. post mean, 14.66 min (SD 11.84), p = 0.0012). Despite the fact that CPR was performed for a longer period of time with the LUCAS device, the percent of patients reported alive when HEMS left the ED was not different between the study periods (pre: F(2,44) = 2.01, p=0.15, post: F(2,59) = 0.84, p=0.44).
Conclusions: There was no difference in survival to ED admission between the two compression methodologies. It appears the LUCAS device is clinically safe, and safer for the crew. In addition, the device allows the agency to be compliant with FAA regulations in the event of inflight cardiac arrests. This study is limited by its small sample size.

A RETROSPECTIVE COMPARISON OF THE KING LARYNGEAL TUBE AND I-GEL AIRWAYS IN OUT-OF-HOSPITAL CARDIAC ARREST: INITIAL EXPERIENCE IN A SINGLE EMS SYSTEM
Kevin Patel, Tyler Vaughn, Colleen MacCallum, William Fales, Western Michigan University, Homer Stryker MD School of Medicine
Background: Supraglottic airways have been used for first-line airway management in out-of-hospital cardiac arrest (OHCA), offering possible advantages of high success rates and potential for insertion without chest compression interruption. Few studies have compared effectiveness of supraglottic airways in OHCA. The study objective was to compare the King-LTS-D and I-gel airways in OHCA.
Methods: This retrospective analysis of data obtained from a countywide EMS system (population ~250,000) compared the King-LTS-D and I-gel airways, used first-line in OHCA. This system changed from the King-LTS-D to the I-gel on 10/1/2015. Data for the I-gel were analyzed for a 6-month period beginning 10/1/2015 and compared to the King-LTS-D for the same 6-month period beginning 10/1/2014. Data were obtained through the state EMS information system. Individual records were manually reviewed. Cases were included when either device was used as the first-line airway for OHCA, and excluded if used in non-OHCA. The primary outcome was first-pass success rate. Secondary outcomes were final success, return of spontaneous circulation (ROSC), and neurologically favorable discharge (cerebral performance categories 1 or 2) rates. Waveform capnography (ETCO2) was primarily used to confirm success. A non-inferiority chi-square analysis was performed with a significance of 0.05.
Results: There were 88 King-LTS-D and 113 I-gel uses during the respective study periods. King-LTS-D and I-gel patients were similar in age (65.4 vs. 62.7 years, p=0.146) and gender (53.4% vs. 60.2% male, p=0.926). First-pass success rate was 81.8% for the King-LTS-D vs. 92.0% for the I-gel (p=0.018). Success confirmed by ETCO2 was comparable between groups (90.9% vs. 93.8%, p=0.438). The first-pass and final success rates with the I-gel were estimated to be 10.2% (95% CI: 0.6%, 19.7%) and 11.0% (95% CI: 1.7%, 20.4%) higher, respectively, than with the King-LTS-D. Non-inferiority was not demonstrated between the King-LTS-D and the I-gel for ROSC (38.6% vs. 37.3%, p=0.441) or neurologically favorable hospital discharge rate (7.0% vs. 12.0%, p=0.838).
Conclusions: In OHCA, the I-gel airway was found to be non-inferior to the King-LTS-D airway for first-pass and final success rates. Non-inferiority was not demonstrated for ROSC or neurologically favorable hospital discharge rates. Our results support continued study of the I-gel airway in OHCA.
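The success-rate differences above can be approximately reproduced from counts back-calculated from the reported percentages (King-LTS-D 72/88 = 81.8%, I-gel 104/113 = 92.0%); the exact interval method used in the study is not stated, so this Wald-interval sketch is an assumption:

```python
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference of two proportions, p2 - p1."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# First-pass success, counts back-calculated from the reported percentages:
# King-LTS-D 72/88 (81.8%) vs. I-gel 104/113 (92.0%)
diff, lo, hi = risk_difference_ci(72, 88, 104, 113)
print(f"{diff:+.1%} (95% CI {lo:.1%}, {hi:.1%})")
```

This gives roughly 10.2% (0.7%, 19.7%), close to the reported (0.6, 19.7); the small lower-bound discrepancy suggests the authors may have used a slightly different method, such as a continuity correction.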

PREHOSPITAL RECURRENT VENTRICULAR FIBRILLATION CARDIAC ARRESTS TREATED WITH BETA-ADRENERGIC BLOCKERS
Damian J. Liebhardt, UT Health San Antonio
Background: Many methods exist in ACLS guidelines for prehospital providers treating VF. VF is best treated with early defibrillation. Recurrent VF can be treated with anti-arrhythmic medications, additional defibrillations and/or double sequential defibrillation. The current evidence in prehospital cardiac arrests regarding the use of metoprolol is not as robust as the research for in-hospital episodes of VF arrests.
Methods: All cardiac arrests in a major urban EMS system from January 2014 to December 2015 were retrospectively analyzed. The data were prospectively collected in a comprehensive cardiac arrest database by the Office of the Medical Director, evaluated for independent variables in the prehospital setting of recurrent ventricular fibrillation, and compared with basic demographic measurements.
Results: A total of 2,362 cardiac arrests were identified in the selected time period. A total of 117 cases of recurrent VF were identified by review of Patient Care Reports; 11 of these were treated with metoprolol. All had previously received amiodarone, calcium chloride, vasopressin, and sodium bicarbonate during the course of resuscitation. Patients treated with metoprolol had uniformly fatal outcomes, compared with 17 of 106 (16.0%) in the other group who survived to hospital admission. Age (p=0.29, 95% CI [-4.0, 13.1]), initial end-tidal CO2 (p=0.27, 95% CI [0.20, 5.7]), and total EMS epinephrine 1 mg doses (p=0.087, 95% CI [-2.2, 0.15]) were not statistically significant. Total EMS shocks delivered did reach statistical significance (p=0.02), but with a 95% CI of [-2.91, 0.27].
Conclusions: Data suggest lower survival to hospital admission when metoprolol is administered in this subset of OHCA patients. Anti-arrhythmic therapy for recurrent ventricular fibrillation in prehospital cardiac arrest would benefit from larger, multi-center studies to examine the benefit, if any, of using this class of drugs in the prehospital setting.
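The abstract reports no formal test of the 0/11 versus 17/106 survival comparison. With counts this small, a Fisher exact (hypergeometric) probability is the natural check; the following is an editorial illustration, not an analysis from the study:

```python
from math import comb

# 2x2 table from the abstract:
#   metoprolol:     0 survived to admission, 11 did not
#   no metoprolol: 17 survived to admission, 89 did not
total, survivors, exposed = 117, 17, 11

# Probability of drawing zero survivors among the 11 metoprolol patients
# under the hypergeometric null; since zero is the most extreme low table,
# this is also the one-sided Fisher exact p-value.
p = comb(total - survivors, exposed) / comb(total, exposed)
print(f"one-sided p = {p:.2f}")  # one-sided p = 0.16
```

At about 0.16 one-sided, a uniformly fatal metoprolol group of n=11 could plausibly arise by chance, which is consistent with the conclusion's call for larger, multi-center studies.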

DOES METRONOME-GUIDED CARDIOPULMONARY RESUSCITATION IMPROVE THE QUALITY OF BYSTANDER CHEST COMPRESSIONS?
Thamir Osama Alsayed, Faisal Ahmad Katbi, University of Dammam – King Fahd Hospital of the University
Background: Cardiopulmonary resuscitation is an important lifesaving procedure that can be effectively taught to anyone. Evidence has shown that bystander CPR and high-quality chest compressions are predictors of survival. The 2010 and 2015 American Heart Association guidelines emphasized the importance of high-quality CPR for survival, describing it as a cornerstone of the systems of care that can optimize outcomes beyond ROSC. In this study, we examine whether bystander use of metronome-guided defibrillators and AEDs helps them perform higher-quality CPR and thus improves patient outcome.
Methods: This is a prospective observational study comparing chest compression quality data (rate, depth, and fraction) with and without feedback from metronome-guided CPR. Data were collected during CPR teaching classes for first-year college students (preparatory program). The data were used to calculate the in-target chest compression percentage, defined for the purposes of this study as the percentage of chest compressions meeting the 2015 AHA guidelines for rate (100-120/min), depth (2-2.4 inches) and fraction (60-80%).
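The in-target definition above can be expressed as a simple predicate; a sketch using the thresholds stated in this abstract (the function name is an editorial choice):

```python
def in_target(rate_per_min, depth_inches, fraction_pct):
    """True if a compression measurement meets all 2015 AHA targets as
    defined in this study: rate 100-120/min, depth 2-2.4 inches, and
    chest compression fraction 60-80%."""
    return (100 <= rate_per_min <= 120
            and 2.0 <= depth_inches <= 2.4
            and 60 <= fraction_pct <= 80)

print(in_target(110, 2.2, 70))  # True
print(in_target(130, 2.2, 70))  # False: rate too fast
```

The in-target percentage for a session is then simply the fraction of recorded compressions for which this predicate holds.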
Results: We included data from 516 manikin-simulated CPR sessions. All chest compressions were recorded through metronome-guided devices, measuring depth, rate and fraction, from which the in-target chest compression percentage was calculated. Each candidate was asked to perform one cycle of CPR without receiving any feedback on the quality of their chest compressions, followed by one cycle of CPR with feedback from the metronome-guided device. Without feedback, 30 percent of chest compressions met the 2015 AHA targets; with feedback, the in-target percentage reached 71 percent.
Conclusions: Feedback on chest compression quality improved CPR performance. In this study, metronome-guided CPR devices improved the percentage of in-target chest compressions for bystander CPR. These simulated sessions showed that providing feedback to bystanders through metronome-guided devices produced higher-quality chest compressions and thus the potential to improve survival.

ASSOCIATION BETWEEN PREHOSPITAL CPR TEAM SIZE AND SURVIVAL AFTER OUT-OF-HOSPITAL CARDIAC ARREST IN KOREA: A NATIONWIDE OBSERVATIONAL STUDY
Soo Jin Kim, Ju Ok Park, Sang Do Shin, Fire Science Research Center, Seoul Metropolitan Fire Service Academy
Background: We aimed to evaluate the association between prehospital CPR team size and survival after out-of-hospital cardiac arrest (OHCA).
Methods: This was a nationwide retrospective observational study; data sources were emergency medical service (EMS) run sheets and medical chart review. EMS-assessed OHCAs in Korea between 2006 and 2012 were analyzed. The exposure variable was the number of prehospital CPR (PCPR) team members, which varied from 1 to 3 persons and differed between provinces. The primary outcome was survival to discharge; the secondary outcome was good neurological outcome. We described demographic findings and compared survival rates between groups by PCPR team size, and calculated adjusted odds ratios (AORs) per PCPR team size using multivariable logistic regression models, adjusting for potential confounders at the individual level.
Results: Of 153,078 patients with EMS-assessed OHCA, 123,837 (80.9%) patients with EMS-treated OHCA were included. Of these, cases attended by 1-, 2-, and 3-person PCPR teams numbered 18,578 (15.0%), 86,654 (70.0%), and 18,605 (15.0%), respectively. AORs for survival to discharge were 1.06 (95% CI 0.97-1.17) for 2-person and 1.20 (1.07-1.36) for 3-person PCPR teams. AORs for good neurological outcome were 1.26 (1.07-1.48) for 2-person and 1.63 (1.33-2.00) for 3-person PCPR teams.
Conclusions: PCPR team size was associated with survival to discharge and good neurological outcome after OHCA. These results may inform emergency dispatch of resources for critical patients such as those with OHCA or severe trauma.

DEFIBRILLATION AT 200 JOULES: A SECOND LOOK
Matthew I. Harris, Larissa Dudley, Ronald Klebacher, Ammundeep Tagore, Navin Apriyaprakai, Eric Wasserman, Robert Bauter, Mark A. Merlin, Newark Beth Israel Medical Center
Background: Patients in ventricular fibrillation (VF) or pulseless ventricular tachycardia (PVT) require early and effective conversion to a perfusing rhythm to offer the best chance at survival. Since the 1995 updates to the Advanced Cardiac Life Support (ACLS) guidelines, patients with VF or PVT have been defibrillated at an initial energy of 200 Joules (J). A review of the literature did not identify any superiority of 200J as an initial energy level when compared to 300J or 360J. We chose to evaluate rates of return of spontaneous circulation (ROSC) in our EMS system for patients in VF or PVT managed with an initial defibrillation of 200J.
Methods: We performed a retrospective chart review of all patients with VF or PVT over a 1-year period managed with an initial energy dose of 200J. Our EMS system, MONOC (Neptune, NJ), is the largest provider of advanced life support (ALS) services in the State of New Jersey. Utilizing our electronic medical record (RescueNet, Zoll) and information from our defibrillator (LifePak 15), we extracted demographic and biometric data, interventions including defibrillation doses and medications, and outcomes.
Results: From January 1, 2014 – December 31, 2014, MONOC EMS provided ALS care to 34,545 patients. 1,192 (3.45%) of calls were for confirmed cases of cardiac arrest. 146 of these patients (12%) were found to have VF or PVT at the time of ALS response. 83 patients underwent defibrillation at an initial energy of 200J. Of these patients, only 4 (4.8%) cases resulted in successful ROSC; notably, three of the four were witnessed cardiac arrests with early CPR. 63 of the 146 patients with VF/PVT were initially defibrillated with 300J or 360J and had a notably higher rate of ROSC, at 40.4%.
Conclusions: Of eighty-three patients with VF or PVT who underwent initial defibrillation at 200J, only 4.8% of cases resulted in ROSC. We found that the use of 200J is ineffective for most patients and as such our institution has chosen to abandon the use of 200J in the management of VF and PVT. Further research is needed to identify the most appropriate initial energy level.

INTUBATION OF PREHOSPITAL PATIENTS WITH CURVED LARYNGOSCOPE BLADE IS MORE SUCCESSFUL THAN WITH STRAIGHT BLADE
Scott M. Alter, Lisa M. Clayton, Brian Walsh, Florida Atlantic University
Background: Direct laryngoscopy can be performed using curved or straight blades, and providers usually choose the blade with which they are most comfortable. Anecdotally, curved blades are often thought of as easier to use than straight blades. We seek to compare intubation success rates of paramedics using curved versus straight blades.
Methods: This retrospective chart review was performed in a hospital-based suburban ALS service with 20,000 calls per year. Patients with any direct laryngoscopy intubation attempt over an 8.5-year period were selected as subjects. Paramedics were allowed to use the blade of their choice and were told to maximize first attempt success. Prehospital medical records were reviewed. First attempt success rate and overall success rate were calculated for attempts with curved and straight blades. Differences between the two groups were calculated, along with 95% confidence intervals.
Results: 2299 patients were intubated by direct laryngoscopy. Of these, 1865 had attempts with a curved blade, 367 had attempts with a straight blade, and 67 had attempts with both. Baseline characteristics were similar between groups. For curved blade intubations, average age was 69.1 years (SD 19.2), weight was 83.8 kg (SD 28.6), and 52.8% (SD 0.50) were male. For straight blade intubations, average age was 68.7 years (SD 20.3), weight was 84.3 kg (SD 28.9), and 57.4% (SD 0.50) were male. First attempt success was 86% with a curved blade and 73% with a straight blade: a difference of 13% (95% CI: 9-17). Overall success was 96% with a curved blade and 81% with a straight blade: a difference of 15% (95% CI: 12-18). There was an average of 1.11 intubation attempts per patient with a curved blade and 1.13 attempts per patient with a straight blade (2% difference, 95% CI: -3 to 7).
Conclusions: Our study found a significant difference in orotracheal intubation success rates between laryngoscope blade types. Direct laryngoscopy using a curved blade had a higher first attempt success rate as well as overall success rate of endotracheal intubation when compared to using a straight blade. Paramedics should consider selecting a curved blade as their tool of choice to maximize intubation success.

DETERMINING A NEED FOR POINT OF CARE ULTRASOUND (POCUS) IN HELICOPTER EMS
Tom Grawey, Tim Lenz, Mary Beth Phelan, Medical College of Wisconsin
Background: The use of point of care ultrasound (PoCUS) has become a helpful tool to aid in the diagnosis of acute life-threatening conditions in unstable patients in the emergency department (ED). The decreasing size of portable ultrasound machines allows for use of this modality in the prehospital setting, particularly in Helicopter EMS (HEMS) programs. Identifying which exams would be most utilized in HEMS may inform the development of a helicopter PoCUS program that develops providers’ sonography skills on the exams they will commonly use. Our objective was to determine the percentage of the HEMS patient population that would potentially benefit from PoCUS and how commonly the EFAST protocol (for trauma patients) or RUSH protocol (for medical patients) could be used.
Methods: This was a retrospective chart review of one year (2015) of patient data from flights of adult patients in a mid-sized midwestern helicopter EMS system. Vital signs, information on the nature of the call and, if known, the patient’s diagnosis were extracted from the chart by a trained reviewer. Hypotensive patients were deemed potential ultrasound candidates; hypotension was defined as systolic blood pressure less than 90 mmHg. Presence of a diagnosis was based on HEMS provider charting. Patients who died prior to transport and cancelled calls were excluded. IRB approval was obtained from the Medical College of Wisconsin.
Results: There were 216 adult runs; three were excluded. Of the 213 remaining cases, 100 (47%) were trauma, and 51 (51%) of those had an episode of hypotension. 39 (39%) of trauma patients had hypotension with an unclear cause. 113 (53%) patients were medical, 73 (65%) of whom had an episode of hypotension. 4 (3%) of medical patients had hypotension with an unknown cause. In total, 124 (58%) patients were hypotensive and could have benefited from PoCUS.
Conclusions: This study found that more than 1 in 2 HEMS patients may benefit from PoCUS, with 51% of trauma patients potential candidates for an EFAST exam and 65% of medical patients for a RUSH exam. PoCUS may be particularly useful in the 39% of trauma and 3% of medical patients with hypotension of unclear cause.

REPEAT NALOXONE DOSING IN PATIENTS WITH SUSPECTED OPIOID OVERDOSE
Ronald Klebacher, Matthew I. Harris, Michael Carr, Larissa Dudley, Bilgehan Onogul, Ammundeep S. Tagore, Eric J. Wasserman, Susmith Koneru, Robert Bauter, Mark A. Merlin, Newark Beth Israel Medical Center
Background: Advanced Life Support (ALS) providers are routinely called upon to care for patients with suspected opioid overdose (OD). Naloxone, an opioid antagonist deliverable by the intranasal route and historically administered by ALS providers, has now become widely available to and utilized by responding law enforcement officers as well as basic life support (BLS) providers. As ALS is often requested or simultaneously dispatched to these suspected opioid overdoses, our aim was to describe the characteristics of patients requiring repeat dosing of naloxone and thus warranting ALS assessment and transport to the hospital for further management.
Methods: A retrospective chart review of patients greater than 17 years old with suspected opioid overdose who had already been treated with an initial intranasal dose of naloxone and were subsequently managed by paramedics from the largest ALS provider in the state of New Jersey from April 2014 to June 2016. We describe demographic and biometric data using descriptive statistics to identify the aspects of the history, physical exam findings and interventions associated with repeated naloxone utilization.
Results: 2,166 patients with suspected opioid OD received naloxone from ALS providers. 195 (9%) were managed after an initial dose of naloxone was provided by first responders. Of these 195 patients, police officers administered the first dose in 66.7% of cases. Patients were primarily male (74.4%) and Caucasian (88.2%), with a mean age of 36.4 years. 76.7% of patients were found in the home, 23.1% had a suspected mixed ingestion, and 27.2% had a previous OD. The mean Glasgow Coma Scale (GCS) score was 5, and 96.9% of patients had altered mental status. 20% of patients required a third dose; these patients had relatively lower GCS scores, lower oxygen saturations and a higher degree of tachycardia. Overall, thirteen patients were intubated, and one patient died.
Conclusions: Acute opioid toxicity can be life threatening and may require repeated dosing of naloxone to reduce morbidity and mortality. Twenty percent of patients required more than two doses. Features associated with repeat dosing of naloxone include male gender, lower GCS, tachycardia and relative hypoxia. First responders should continue to activate an ALS response after an initial dose of naloxone.

SUCCESS RATE OF PREHOSPITAL LMA PLACEMENT AFTER FAILED INTUBATION ATTEMPTS
Lisa M. Clayton, Scott M. Alter, Brian Walsh, Richard D. Shih, Florida Atlantic University
Background: Prehospital intubation remains a controversial topic, with previous research indicating worse patient outcomes with any paramedic intubation attempt, let alone failed and multiple airway attempts. Studies also suggest overall prehospital intubation success rates are significantly lower than those performed in the emergency department. Supraglottic airway device placement is the standard fallback for failed EMS intubations and tends to be inserted successfully even after failed intubation, yet it is not commonly used as the initial airway device. We sought to determine the overall and first-attempt success rates of laryngeal mask airway (LMA) placement by paramedics after failed intubation attempts.
Methods: Design: retrospective chart review. Setting: a hospital-based suburban ALS service with volume of 20,000 calls per year. Subjects: All patients requiring LMA placement after failed intubation attempt over an 8.5-year period. Protocol: Prehospital medical records of patients requiring advanced airway management were reviewed. Patients with failed orotracheal intubation and subsequent LMA placement were analyzed. Overall and first attempt success rates were calculated, with 95% confidence intervals.
Results: Of 4,126 prehospital patients requiring advanced airway management, 96 had LMA placement after failed intubation. The average age of this group was 66.4 years (SD 16.6) and weight was 101 kg (SD 43). Prior to LMA insertion, 62 patients had failed direct laryngoscopy (DL), 18 had failed video laryngoscopy (VL), and 16 had failed both DL and VL. The overall success rate of LMA placement was 93% (95% CI: 87-98), with first attempt success 92% (95% CI: 86-97). There was no significant difference in overall LMA insertion success rate (92% vs. 94%) or first attempt success rate (91% vs. 91%) for DL versus VL. The average number of failed intubation attempts prior to LMA insertion was 1.7 (95% CI: 1.5-1.8) for DL and 1.4 (95% CI: 1.2-1.7) for VL.
Conclusions: The LMA is a successful alternative to prehospital intubation for patients with prior failed intubation attempts, whether direct or video. Given the high success rate of LMA placement on these difficult to intubate patients, EMS providers and medical directors should consider LMA as a first line option for achieving prehospital airway stabilization.
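Success-rate intervals like those above follow the standard normal-approximation (Wald) interval for a binomial proportion. A minimal sketch, assuming a hypothetical count of 89 successes in 96 placements (the abstract reports only the percentages), reproduces an interval consistent with the reported 93% (87-98):

```python
import math

def wald_ci(successes, trials, z=1.96):
    """95% normal-approximation (Wald) CI for a binomial proportion."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)

# 89/96 is an assumed count consistent with the reported 93% (95% CI: 87-98)
p, low, high = wald_ci(89, 96)
```

With small samples or proportions near 0 or 1, a Wilson or exact (Clopper-Pearson) interval is usually preferred over the Wald form.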

THE ROLE OF BYSTANDERS IN PROMPT CPR AND RELATED OUTCOMES AFTER OUT-OF-HOSPITAL CARDIAC ARREST
Gwan Jin Park, Kyoung Jun Song, Ju Jeong, Ki Jeong Hong, Sang Do Shin, Seoul National University Hospital
Background: Without cardiopulmonary resuscitation (CPR), cardiac arrest survival declines 5-10% for every minute. However, there is a limit to how much the time from collapse to emergency medical services (EMS) arrival can be reduced. This study aimed to examine the association between the time interval from collapse to CPR by EMS providers and outcomes in patients who received bystander CPR.
Methods: A population-based observational study was conducted on out-of-hospital cardiac arrests (OHCA) of cardiac etiology witnessed by laypersons in Korea between 2012 and 2014. The exposure variable was the time interval from collapse to CPR by EMS providers, categorized into quartile groups: fastest (group 1, <4 min), fast (group 2, 4 to <8 min), late (group 3, 8 to <15 min), and latest (group 4, 15 to <30 min). Endpoints were survival to discharge and survival with favorable neurological outcome (Cerebral Performance Category [CPC] 1-2). Multivariable logistic regression analysis was performed. The final model evaluated the interactive effect between bystander CPR and the time interval from collapse to CPR by EMS providers.
Results: A total of 15,354 OHCAs were analyzed. Bystander CPR was performed in 8,591 (55.95%). The survival to discharge rate was 10.6% (n=1,632) and the favorable neurological outcome rate was 6.5% (n=996). In an interaction model of bystander CPR patients, AORs (95% CIs) for survival to discharge were 0.89 (0.66-1.20) in the fast group, 0.76 (0.57-1.02) in the late group, and 0.52 (0.37-0.73) in the latest group compared with the fastest group. For favorable neurological outcome, AORs were 1.12 (0.77-1.62), 0.90 (0.62-1.30), and 0.59 (0.38-0.91), respectively.
Conclusions: Even with long ambulance response times, patients who received bystander CPR showed favorable outcomes. Promptly delivered bystander CPR was strongly associated with increased survival.

DEFINING THE VALUE OF PULSE CHARACTER ASSESSMENT AFTER INJURY BY PREHOSPITAL PROVIDERS
Carson Cope Petrash, Preston Love, Chetan Kharod, Scott Bolleter, David Wampler, Brian J. Eastridge, University of Texas Health Science Center at San Antonio
Background: Systolic blood pressure (SBP) is a standard physiologic measure to assess and triage trauma patients in the prehospital environment and a SBP < 110 mmHg is associated with shock. However, the equipment and time to measure SBP is not always available, particularly in austere or hostile environments and mass casualty incidents. Emergency medical service (EMS) providers, both civilian and military, are trained to assess radial pulse character as a marker of hemodynamic status. The purpose of this analysis was to assess the utility of the radial pulse character to identify prehospital hypotension after trauma.
Methods: The electronic field data collection registry of a large regional trauma system was used to identify 135,971 patients transported by EMS for injury. Data collected included patient demographics, injury mechanism, SBP, and pulse character classified as normal or abnormal (weak/absent). In addition, EMS providers were identified as basic life support (BLS), advanced life support (ALS), or air medical transport (AMT). Associations were assessed between radial pulse character, SBP, and mortality.
Results: The mean SBP associated with the subjective qualification of abnormal pulse character was 95 ± 53 mmHg. Injured patients with abnormal pulse character had a mortality of 6.0% compared to 1.2% in those with normal pulse character (p<0.05). The ability to discern pulse character differences was associated with the level of training of the prehospital provider. The sensitivity of AMT providers for recognizing abnormal pulse character was 93% at an SBP of 60 mmHg, 82% at 75 mmHg, and 66% at 90 mmHg. In contrast, sensitivities at the same SBP levels were 76%, 53% and 31% for ALS and 36%, 30% and 14% for BLS, respectively (p<0.05).
Conclusions: These data suggest that abnormal pulse character is a useful clinical tool to rapidly assess hemodynamic status and predict mortality risk. Prehospital provider experience level played a significant role in the value of this physical assessment tool. These results highlight the value of rudimentary physical exam skills and prehospital provider training in order to optimize prehospital trauma triage and patient management decisions.
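The threshold-specific sensitivities above are the fraction of truly hypotensive patients (SBP below a given cutoff) whose pulse was called abnormal. A minimal sketch on synthetic data illustrates the calculation; the logistic relation between SBP and the abnormal-pulse call is an assumption for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
sbp = rng.normal(120, 30, 5000).clip(40, 220)        # synthetic systolic BPs
# hypothetical provider call: abnormal pulse more likely at lower SBP
abnormal = rng.random(5000) < 1 / (1 + np.exp((sbp - 85) / 12))

def sensitivity(threshold):
    """Fraction of truly hypotensive patients (SBP < threshold) flagged abnormal."""
    hypotensive = sbp < threshold
    return abnormal[hypotensive].mean()

# sensitivity falls as the hypotension cutoff rises, mirroring the abstract
sens_60, sens_90 = sensitivity(60), sensitivity(90)
```

At a low cutoff only the sickest patients count as "hypotensive", and nearly all of them are flagged; at a higher cutoff milder cases are included and sensitivity drops, which is the pattern reported for all three provider levels.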

AGGRESSIVE COOLING PRIOR TO TRANSPORT FROM A LARGE MASS GATHERING EVENT RESULTS IN FAVORABLE NEUROLOGIC OUTCOMES FOR DRUG-INDUCED HYPERTHERMIA
Benjamin Allen Smith, Dale Carrison, Pete Carlo, Nicolas Rubel, University of Nevada School of Medicine, Department of Emergency Medicine
Background: Drug-induced hyperthermia is associated with a high mortality and management relies on rapid cooling to prevent multi-system organ failure and death. Available data supports the use of cold or ice water immersion for rapid cooling of these patients, but no published cases exist regarding safety or efficacy of cooling prior to transport.
Methods: A retrospective chart review was performed of patients presenting to the medical tent at a large electronic dance music festival over a single weekend with suspected drug-induced hyperthermia, with a primary endpoint of neurologically intact survival. Inclusion criteria were suspicion for drug use and an initial rectal temperature greater than 106°F. These patients were managed by a team of physicians, nurses, and paramedics using a pre-determined cooling protocol including cold IV fluids, IV dantrolene, ice water immersion, and rapid sequence intubation. Prehospital and hospital records were abstracted by a single physician reviewer and entered into an Excel spreadsheet, and descriptive statistics were calculated.
Results: Over the three-day event, the festival recorded approximately 140,000 attendees per night, and 1,475 participants were treated in the medical tent, of whom eight patients met our inclusion criteria. The median age was 23 (IQR=2), and 4 (50% [95% CI 21.5-78.5%]) were female. The median initial rectal temperature was 42.1°C (IQR=0.5°C). Seven of the eight patients (87.5% [95% CI 52.9-97.8%]) were discharged from the hospital neurologically intact; one patient developed multi-system organ failure and died.
Conclusions: A protocol of aggressive cooling with cold IV fluids, IV dantrolene, ice water immersion, and rapid sequence intubation for patients with severe drug-induced hyperthermia prior to transport from a large mass gathering event resulted in favorable neurologic outcomes in this cohort. The study is limited by retrospective data collection, lack of physician reviewer blinding, a small study cohort, and lack of a control group. Our data suggest that a larger, prospective controlled study is indicated to objectively assess the efficacy and safety of this protocol.

AN EVALUATION OF ADULT/PEDIATRIC AND INFANT/NEONATE END-TIDAL CO2 SAMPLING LINE UTILIZATION IN EMERGENCY MEDICAL SERVICES
Ryan R. LaGrange, Stephen E. Taylor, Matthew J. Chovaz, Brody School of Medicine, East Carolina University
Background: Waveform capnography (EtCO2) has become a standard of care in the prehospital setting due to its ability to rapidly identify correct endotracheal (ET) tube placement and predict return of spontaneous circulation (ROSC). According to the manufacturer, an infant/neonate EtCO2 sampling line is recommended for ET tube sizes ≤4.5 mm versus an adult/pediatric sampling line for ET tube sizes ≥5.0 mm. Given the importance of capnography data, it is crucial that the correct size is used in order to ensure accurate measurements. Our objective was to determine whether EMS agencies are following the manufacturer's recommendation to use the infant/neonate EtCO2 sampling line for ET tube sizes ≤4.5 mm.
Methods: EMS agencies in each of the 100 counties in North Carolina were contacted by phone to complete the survey. Survey questions focused on capnography use, type of equipment used, resuscitation protocol in pediatric patients, and whether the EMS agency carries one or both EtCO2 sampling lines. Agencies that carry both adult/pediatric and infant/neonate EtCO2 sampling lines were also asked which ET tube sizes each device is used for.
Results: The response rate for the survey was 76% (76 of 100). All 76 EMS agencies (100%) use EtCO2, yet only 12 agencies (15.8%) currently use both adult/pediatric and infant/neonate EtCO2 sampling lines. Many agencies carrying both sampling lines expressed uncertainty about when one device was indicated over the other. Twenty-five agencies (32.9%) also stated that an EtCO2 reading of ≤10 mmHg influenced the length of their pediatric resuscitations.
Conclusions: A majority of EMS agencies in North Carolina use EtCO2, yet only a small percentage carry both adult/pediatric and infant/neonate EtCO2 sampling lines. Using the wrong sampling line may give inaccurate measurements, which could potentially affect patient treatment and outcome. This study only represents current EtCO2 sampling line usage in the prehospital setting. Further research is needed to determine whether it is safe to use adult/pediatric EtCO2 sampling lines to monitor intubated infants/neonates, or whether EMS agencies should use the infant/neonate sampling line.

INTERACTIVE EFFECT OF HYPOXIA ACROSS SHOCK AT THE SCENE ON HOSPITAL MORTALITY AND DISABILITY IN SEVERE TRAUMA
Minwoo Kim, Sang Do Shin, Department of Emergency Medicine, Seoul National University College of Medicine
Background: Trauma is one of the biggest public health issues in many countries. Hypoxia in trauma is a key risk factor for poor outcome. It is unclear whether the effect size of hypoxia on hospital mortality and disability differs according to shock status in the field.
Methods: Emergency medical services (EMS)-treated severe trauma (ST) patients who had an abnormal revised trauma score in the field, or who met criteria of the prehospital trauma triage scheme for transport to a trauma center, were analyzed among patients injured by five mechanisms (traffic accident, fall, collision, penetrating, and machinery injury) and transported by EMS in 2012 and 2013 in 10 provinces in Korea. The exposure variable was hypoxia (oxygen saturation <94%) measured by EMS in the field. Primary and secondary endpoints were hospital mortality and disability measured by the Glasgow Outcome Scale. Multivariable logistic regression was used for hospital outcomes, adjusting for confounders (age, sex, Charlson co-morbidity, mechanism, elapsed time interval, mental status in the field, shock, injury severity score (ISS), etc.) to calculate adjusted odds ratios (AORs) with 95% confidence intervals (95% CIs). To compare the effect size of hypoxia between shock (SBP<90 mmHg) and non-shock status, an interaction term (hypoxia*shock) was added.
Results: A total of 17,406 EMS-ST patients were analyzed; 43.2% had an ISS higher than 9. In total, 2,598 (14.9%) died and 3,292 (18.9%) were disabled at discharge. 16.8% showed hypoxia in the field. 35.7% of hypoxic patients died versus 10.7% of non-hypoxic patients (p<0.0001); 38.3% of hypoxic patients were disabled at discharge versus 15.0% of non-hypoxic patients (p<0.0001). The AOR of hypoxia was 1.96 (1.74-2.21) for hospital mortality and 1.42 (1.27-1.58) for disability. In the interaction model, AORs for hospital mortality were 2.71 (2.35-3.12) in the shock group and 1.70 (1.56-1.84) in the non-shock group; AORs for disability were 1.59 (1.39-1.81) and 1.31 (1.21-1.41), respectively.
Conclusions: Hypoxia in the field was a significant risk factor for hospital mortality and disability in EMS-ST patients. The effect of hypoxia differed significantly according to shock status. In shock patients, hypoxia was associated with much higher mortality and disability.
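An interaction model of the kind described above lets the hypoxia odds ratio differ between shock strata: the OR is exp(β_hypoxia) in the non-shock group and exp(β_hypoxia + β_interaction) in the shock group. A minimal sketch on synthetic data (coefficients chosen only to roughly echo the reported AORs of 1.70 and 2.71, not the study's data), fitting the logistic MLE by Newton-Raphson:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
hypoxia = rng.binomial(1, 0.17, n)
shock = rng.binomial(1, 0.15, n)
# assumed true log-odds: hypoxia effect 0.53 alone, 0.53 + 0.47 with shock
logit = -2.2 + 0.53 * hypoxia + 0.80 * shock + 0.47 * hypoxia * shock
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([np.ones(n), hypoxia, shock, hypoxia * shock])
beta = np.zeros(4)
for _ in range(25):                              # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    w = p * (1 - p)
    beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (death - p))

or_nonshock = np.exp(beta[1])                    # hypoxia OR when shock = 0
or_shock = np.exp(beta[1] + beta[3])             # hypoxia OR when shock = 1
```

A significant interaction coefficient (β_3 here) is what licenses the abstract's claim that the hypoxia effect "differed according to shock status", rather than simply comparing the two stratum-specific ORs by eye.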

IS THE PRESENCE OF HYPOGLYCEMIA IN PREHOSPITAL SEIZURE PATIENTS A MYTH?
Don Eby, Jenn Robson, Melanie Columbus, Southwest Ontario Regional Paramedic Base Hospital Program, London, Ontario
Background: Conventional wisdom states that hypoglycemia is frequently present during a seizure or is a ‘cause’ of seizures. Recent literature disputes this. The purpose of this study was to determine the frequency of hypoglycemia in patients identified as having “seizure” listed as the primary or final problem code in Ambulance Call Reports from a large regional paramedic base hospital program.
Methods: Retrospective analysis of a database of ambulance call reports (ACRs) for 2 years (January 1, 2014 to December 31, 2015). All 5,854 ACRs with paramedic-determined primary or final problem codes of “seizure” were identified from a database of all calls performed by 8 municipal paramedic services covering a total urban and rural population of 1.4 million. The municipal paramedic services used iMedic electronic ACRs. A 10% sample from each year was generated using a random number table. ACRs were manually searched and data were extracted onto spreadsheets. Results were analyzed and described using frequencies and summary statistics.
Results: A total of 582 calls were analyzed. 430 (73.9%) patients were adults and 152 (26.1%) were paediatric (age <18). A blood sugar was determined in 501/582 (86.1%) of all calls: adults 388/430 (90.2%), paediatric 113/152 (74.3%). The GCS, when measured, was 15 in 280/575 (48.7%) cases. Seizures were witnessed by paramedics in 47/582 (8.1%) calls: adults 33/430 (7.7%), paediatric 14/152 (9.2%). In calls where paramedics witnessed a seizure, a blood sugar was determined 36/47 (76.6%) of the time: adults 25/33 (75.8%), paediatric 11/14 (78.6%). Hypoglycemia (BS <4.0 mmol/L) was found in 3 cases when BS was checked: overall 3/501 (0.6%), adults 1/388 (0.3%), paediatric 2/113 (1.3%). Case 1: age 12 months, GCS 13, BS 3.9 mmol/L. Case 2: age 22 months, GCS 15, BS 3.9 mmol/L. Case 3: age 70 years, GCS 12, BS 3.8 mmol/L.
Conclusions: Hypoglycemia was rarely found in patients who had a prehospital seizure. When present, hypoglycemia was unlikely to be the cause of the seizure. The routine determination of blood sugars in all patients who have had a seizure prior to paramedic arrival should be reconsidered.

A COMPARISON OF VIDEO LARYNGOSCOPY TO STANDARD LARYNGOSCOPY IN EXPERIENCED PARAMEDICS
Bryan E. Bledsoe, Darren Mahoney, University of Nevada, Reno School of Medicine
Background: Endotracheal intubation has always been an essential paramedic skill. With the advent of video laryngoscopy paramedics now have an additional tool to facilitate endotracheal intubation.
Numerous video laryngoscopy devices are available. The VividTrac (Vivid Medical, Palo Alto, CA) is a novel disposable video laryngoscope that provides real-time images on a computer or smartphone. Our objective was to determine whether the VividTrac video laryngoscope is an effective device for endotracheal intubation by paramedics.
Methods: The study was declared exempt by the University of Nevada, Reno Institutional Review Board. Paramedics attending a mandatory educational session were asked to voluntarily participate in the study. Two intubation manikins (Laerdal Airway Management Trainer, Wappingers Falls, NY) were used for the protocol. Subjects first attempted standard endotracheal intubation on one manikin followed by endotracheal intubation with the VividTrac. The subject then moved to the second manikin and repeated the protocol. The subject was monitored by an independent observer and time intervals recorded on the research datasheet.
Results: 15 subjects made 32 intubation attempts using standard endotracheal intubation and 32 intubation attempts using the VividTrac. The cohort consisted of experienced paramedics with a mean of 13.4 years of experience (range: 1-23). The overall intubation success rate for standard endotracheal intubation was 100% versus 91% for the VividTrac. Time to first visualization of the vocal cords was faster for standard intubation (2.6 versus 3.7 seconds, p=0.01). Time to attempted endotracheal tube placement was faster for standard intubation (6.2 vs. 9.7 seconds, p=0.01). Time to first ventilation following endotracheal tube placement was similar (12.3 vs. 15.2 seconds, p=0.32). The quality of cord visualization was similar between the two techniques, with similar average Mallampati scores (1.4 vs. 1.4, p=0.41) and percentage of glottic opening (POGO) scores (93.4% vs. 91.7%, p=0.27).
Conclusions: The VividTrac video laryngoscope system demonstrated similar, but not superior, performance to standard endotracheal intubation in a cohort of experienced paramedics.

A DESCRIPTIVE ANALYSIS OF PREHOSPITAL REFRACTORY VENTRICULAR FIBRILLATION
Andrew Schappert, Brandon Chau, Aaron Leung, Kristine VanAarsen, Matthew Davis, Ontario Regional Base Hospital Program, London Health Sciences Centre
Background: When ventricular fibrillation (VF) cannot be terminated with conventional external defibrillation, it is classified as refractory VF (RVF). Currently, there is little evidence for the treatment of RVF, with double sequential external defibrillation (DSED) being a potential novel treatment. The goal of this investigation is to provide a descriptive analysis of patients in an urban EMS system with RVF as well as to determine how often DSED may have been utilized as a potential treatment option in these patients.
Methods: Ambulance Call Records (ACRs) of all patients with out-of-hospital cardiac arrest (OHCA) between March 1, 2012 and April 1, 2016 were reviewed. All cases in which one or more defibrillations occurred were examined and cases of RVF (≥5 consecutive defibrillations) were determined by manual review. Descriptive characteristics and clinical variables were compared for RVF versus non-RVF patients using chi-square or t-test analyses where appropriate.
Results: 645 OHCAs occurred during the study period. 193 (29.9%) arrests involved at least one analysis of VF. Ninety (13.9%) cases of OHCA were identified as RVF. Thirty-four (37.8%) of the RVF arrests had two or more manual defibrillators on scene. There was no difference between the non-RVF and RVF groups in age (65.02 vs. 67.28, p=0.313) or gender (p=0.132). There was no difference in time from EMS activation to arrival at patient (9.00 min vs. 8.73 min, p=0.610) or time from EMS activation to first defibrillation (11.31 min vs. 12.63 min, p=0.122) between the two groups. There was no difference in the incidence of bystander CPR between groups (p=0.840).
Conclusions: In the study group the incidence of RVF was 14%, and nearly half of all OHCA involving VF were refractory in nature. Over a third of the RVF cases had the potential for DSED to be utilized in the prehospital setting. Although no variables were identified as associated with prehospital RVF, next steps will include a review of hospital charts to determine the incidence of sustained RVF in transported patients, survival to admission, survival to hospital discharge, and potential co-morbidities that may increase the likelihood of RVF.

PREHOSPITAL RAPID SEQUENCE AIRWAY: A SERIES OF 57 CASES
Sean Patrick O’Brien, Darren Braude, Mike Torres, University of New Mexico
Background: Rapid Sequence Airway (RSA) is an airway management technique that combines the pharmacology and preparation of Rapid Sequence Intubation (RSI) with the immediate planned placement of an extraglottic device. Publications on this technique have been limited. The purpose of this study was to determine the overall success rate of RSA in the prehospital setting. A secondary goal was to determine the aspiration rate associated with RSA.
Methods: This is an IRB-approved retrospective case series of prehospital adult and pediatric RSA cases between 2007 and 2016 from one ground EMS system and one air transport system in New Mexico. Overall success of RSA was determined by the ability to oxygenate and ventilate the patient effectively. Aspiration data were obtained via hospital chart review; aspiration was considered present if radiologic evidence (CXR, chest CT) of aspiration appeared within 48 hours of hospital presentation.
Results: During the study period 57 patients from 1 to 73 years of age and 10 to 125 kg underwent prehospital RSA. The LMA-Supreme was used on first attempt in 30 cases, the King LT in 18 cases, the LMA-Unique in 8 cases and a Combitube in 1 case. RSA was successful on first attempt in 46 of 57 cases (81%) and overall successful in 52 of 57 cases (91%). Five cases required intubation due to inability to achieve adequate ventilation or oxygenation. Aspiration data was available on 42 patients of whom 3 (7%) were found to have evidence of aspiration within 48 hours.
Conclusions: Overall success rate for RSA was high and aspiration rates were low considering this was a high risk population followed for 48 hours and all underwent secondary RSI procedures in the Emergency Department. RSA may be a viable option for emergency airway management in the prehospital setting and warrants further study.

REDUCED-DOSE KETAMINE FOR EMERGENT CHEMICAL RESTRAINT
Bjorn K. Peterson, Kari Haley, Kevin Torkelson, Aaron Burnett, Regions Hospital EMS
Background: Ketamine is an effective medication used for chemical restraint in emergent settings. For managing patients with severe agitation, time is critical in order to prevent injury. Ketamine can be administered intramuscularly with a rapid onset and excellent sedation, allowing for safe provision of care. The optimal dose for emergent chemical restraint has not been studied. Based on our experience, we hypothesize that a standard dose of 250mg IM will reduce the need for intubation without increasing the need for additional sedation in order to prevent injury.
Methods: As a retrospective chart review, data was obtained by querying EMS patient care records for two EMS agencies. Included were adult patients who received intramuscular ketamine and were transported to one of two affiliated hospitals. Excluded were those who received ketamine for reasons other than chemical restraint. Hospital charts were then reviewed to determine dose administered, the need for a repeat dose, the need for additional sedation by EMS or within 30 minutes of hospital arrival, and whether the patient required intubation.
Results: 145 patients were included in final analysis. 11 patients received a single 500mg dose of ketamine. The remainder received 150 to 300mg (mean 265mg). 22 patients required a second dose (overall mean 300mg). Of the 29 patients receiving > 300mg, 7 required intubation, whereas 23 of 116 patients receiving ≤ 300mg required intubation, for an overall intubation rate of 20.7% (similar research at higher doses resulted in a 30% intubation rate). Of the 134 patients who received ≤ 300mg initially, 73 (54.5%) required additional sedation either by EMS providers or within 30 minutes of hospital arrival. When compared with prior similar research at higher doses, this demonstrated a statistically significant increase in the need for additional sedation (p < 0.0001). There were no significant injuries documented to responders or bystanders. There was one death noted due to methamphetamine toxicity.
Conclusions: Although there was an increased need for additional sedation, a trend toward less frequent intubation suggests improved patient safety. With an educational focus on appropriate medication titration, we believe 250mg is a reasonable initial dose for time-sensitive chemical restraint.

CREATIVE SIGNALS ANALYSIS OF MEDIA TECHNOLOGY FOR RECOGNIZING CARDIAC ARREST
Patrick Chow-In Ko, Yu-Chen Lin, Shun-Jie You, Wei-Ting Chen, Sot Shih-Hung Liu, Shyh-Shyong Sim, Yuan-Hsiang Lin, Department of Emergency Medicine, National Taiwan University Hospital
Background: Recognition of cardiac arrest by carotid pulse check is correct less than half of the time when performed by the public. Poor recognition of cardiac arrest or of an agonal patient delays early bystander cardiopulmonary resuscitation (BCPR), which must be provided in the critical first five minutes before ambulance arrival. Globally, we still lack effective technology to assist recognition of cardiac arrest and thereby facilitate early BCPR and public access defibrillation (PAD). In this study, we aimed to develop a video signal analysis tool to assist recognition of cardiac arrest.
Methods: We designed an algorithm to transform and analyze signals from smartphone video recordings of part of the patient's body. Fast Fourier transform (FFT) signals were evaluated by the algorithm. Each video recording was fifteen seconds long and was filmed within the first five minutes after a witnessed cardiac arrest in the intensive care unit. The algorithm was applied to video recordings of cardiac arrest patients and the results compared with those of normal volunteers.
Results: We applied the algorithm to video segments from twenty cardiac arrest patients (asystole in 18 cases, ventricular fibrillation in 2 cases) and twenty non-arrest volunteers (median heart rate 74/min, IQR: 65-88/min), matched for age and sex. We derived a formula to calculate a value (termed Slope Alfa), mainly from the cluster of FFT signals evaluated by the algorithm. The mean (SD) Slope Alfa of cardiac arrest patients was significantly different from that of non-arrest volunteers (0.14 [0.09] vs. 1.96 [0.37], p<0.01). The results also suggested that, for cardiac arrest patients, Slope Alfa tends to be less than 1.0.
Conclusions: The algorithm we developed for smartphone video signal analysis may successfully recognize patients in cardiac arrest. Further integration of this technology into mobile devices could provide the general public with an easily accessible tool for cardiac arrest recognition and early chest compressions.
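The Slope Alfa formula itself is not given in the abstract; the underlying idea, though, is that periodic body motion (breathing, pulse-related movement) produces a dominant peak in the FFT of a per-frame video brightness signal, while a motionless arrest victim yields only noise. A minimal sketch under those assumptions (frame rate, frequencies and noise level are all illustrative, not the study's parameters):

```python
import numpy as np

fps, seconds = 30, 15                       # a 15-second clip, as in the study
t = np.arange(fps * seconds) / fps

def dominant_peak_ratio(signal):
    """Largest FFT magnitude of the mean-subtracted signal over the mean magnitude."""
    mag = np.abs(np.fft.rfft(signal - signal.mean()))
    return mag.max() / mag.mean()

rng = np.random.default_rng(1)
noise = rng.normal(0, 0.05, t.size)
breathing = 0.5 * np.sin(2 * np.pi * 0.2 * t) + noise   # ~12 breaths/min motion
motionless = noise                                       # arrest: sensor noise only

ratio_breathing = dominant_peak_ratio(breathing)
ratio_motionless = dominant_peak_ratio(motionless)
```

A classifier would threshold such a spectral statistic, much as the study's Slope Alfa separates arrest (<1.0) from non-arrest recordings.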

AN ANALYSIS OF DISASTER MEDICINE TRAINING FOR MEDICAL STUDENTS
Daniel Shocket, Erik Rueckmann, Courtney Jones, Molly McCann, University of New Mexico
Background: Every hospital is required to have a disaster plan in place, and many physicians take part in the process, but the majority lack formal training in the subject. In 2003, the AAMC and the CDC constructed recommendations for medical school training in Disaster Medicine, but these have been instituted in very few programs nationwide. To address this deficit, a pilot curriculum in Disaster Medicine was instituted within the University of Rochester School of Medicine. The objective of our study is to analyze the difference between pre- and post-course confidence and exam scores of students who completed the curriculum. Our hypothesis is that students will both be more confident in their knowledge of Disaster Medicine and will score higher on the knowledge exam after completing the course.
Methods: Survey data were collected as part of a quality improvement project associated with the Disaster Medicine curriculum. The curriculum involved a flipped classroom concept with pre-coursework and a 2-hour session with tabletop scenarios. Pre- and post-course surveys were completed in courses from August 1, 2015 to January 31, 2016. The survey data were analyzed retrospectively using the Wilcoxon rank-sum test and the Wilcoxon test of the median.
Results: 61 medical students completed both the Disaster Medicine training and the pre- and post-course surveys. The change in overall confidence, measured as the difference between pre- and post-course confidence scores, was statistically significant (p<0.0001), indicating that participants had more confidence after completing the course. Participants scored significantly higher on the post-course test than on the pre-course test (p<0.0001). Overall pre- and post-course median test scores were 5 and 8, respectively. Surprisingly, students with prior Disaster Medicine and EMS experience had lower pre-test median scores (4 vs. 5). Students going into Emergency Medicine had higher median pre-test scores (6 vs. 5).
Conclusions: Students had significantly increased confidence scores and knowledge based exam scores after completing the curriculum. It is our hope that these findings may ultimately translate into these students feeling more prepared to assist in a meaningful way during disasters throughout their careers as physicians.

SALT TRIAGE TRAINING FOR SCHOOL PERSONNEL
Daniel H. Celik, Francis R. Mencl, Scott T. Wilber, Jennifer A. Frey, Lisa Kurland, Michel Debacker, Summa Health System
Background: To determine whether non-medical schoolteachers and other school personnel can be trained to understand and apply SALT (Sort, Assess, Lifesaving Interventions, Treat/Transport) triage methods and lifesaving interventions after brief training. The investigators predicted that subjects could learn to triage with accuracy similar to medical professionals, and could pass an objective structured clinical exam (OSCE) evaluating hemorrhage control.
Methods: Local public and private school personnel were eligible to participate in this prospective observational study. Investigators recorded subject demographic information and prior medical experience. All subjects received a 30-minute lecture on SALT triage and a brief lecture and demonstration of hemorrhage control and tourniquet application. A slideshow-based test with descriptions of mass casualty victims was administered immediately after training. Subjects independently categorized the victims as dead, expectant, immediate, delayed, or minimal. Subjects also completed an OSCE evaluating hemorrhage control and tourniquet application on a mannequin arm.
Results: Personnel from two schools completed the study. Fifty-nine subjects were from a private school that enrolls early childhood through grade 8, and 45 were from a public middle school (n=104). The average subject age was 45 years and 69% were female. Approximately 81% were teachers and 87% had prior cardiopulmonary resuscitation training. Subjects correctly triaged an average (± standard deviation) of 19.9 ± 2.6 of the 25 victims (79.5%). Ninety-six of the subjects passed the tourniquet application OSCE (92.3%).
Conclusions: After two brief lectures and a short demonstration, school personnel were able to triage descriptions of mass casualty victims with an overall accuracy similar to that of medically trained professionals, and most were able to apply a tourniquet correctly. Opportunities for future study include integrating high-fidelity simulation and mock disasters, evaluating knowledge retention, and exploring the study population's baseline knowledge of medical care.

RESPIRATORY DISTRESS: CURRENT EVIDENCE-BASED RECOMMENDATIONS FOR PREHOSPITAL CARE
Melody Jean Glenn, Bennett Lee, Karl Sporer, Dipesh Patel, University of California, San Francisco
Background: Because emergency medical services (EMS) protocols vary widely across the United States, national organizations have collaborated to create evidence-based guidelines to support standardization and quality improvement. However, existing guidelines either do not explain the level of evidence behind their recommendations or apply only to a limited number of conditions. Our objective was to develop evidence-based guidelines for the prehospital treatment of adult respiratory distress by advanced life support providers, with explanations of the level of evidence currently available, and to compare them to the protocols of each of the 33 local EMS agencies (LEMSAs) in California for quality improvement.
Methods: Two review authors independently performed a literature review of research evaluating prehospital treatment of respiratory distress and extracted data from studies meeting the inclusion criteria. We rated the quality of evidence using the GRADE framework, which then informed our guidelines for 9 PICO questions.
Results: The protocol components we analyzed, with their evidence of effect and their usage (as a percentage of the 33 LEMSAs), were as follows. Interventions shown to improve outcomes: oxygen titration in COPD (3%), corticosteroids (0%) and albuterol (100%) in suspected bronchospasm, and non-invasive positive pressure ventilation in undifferentiated respiratory distress or suspected CHF (97%). Ipratropium in bronchospasm (45%) did not affect outcomes. There is insufficient evidence on the use of non-invasive positive pressure ventilation in suspected bronchospasm (97%) or nitrates in suspected congestive heart failure (100%). Furosemide is likely harmful in suspected congestive heart failure (12%).
Conclusions: There is a paucity of prehospital research on the proper management of respiratory distress. When the interventions in our evidence-based guidelines are compared to the 33 California LEMSA protocols for respiratory distress, there is wide variation.