key: cord-0331012-6o2hccgw authors: Strausbaugh, Robert; Cucchiara, Antonino; Dow, Michael; Webb, Sara; Zhang, Jielai; Goode, Simon; Cooke, Jeff title: Finding Fast Transients in Real Time Using Novel Light Curve Analysis Algorithm date: 2021-09-27 journal: nan DOI: 10.3847/1538-3881/ac441b sha: d65b79ab7e5209b8fd29c1b72f6fb5f564a86585 doc_id: 331012 cord_uid: 6o2hccgw

The current data acquisition rate of astronomical transient surveys, and the promise of significantly higher rates in the next decade, necessitate the development of novel approaches to analyze astronomical data sets and promptly detect objects of interest. The Deeper, Wider, Faster (DWF) program is a survey focused on the identification of fast-evolving transients, such as fast radio bursts, gamma-ray bursts, and supernova shock breakouts. It employs simultaneous multi-frequency coverage of the same part of the sky spanning several orders of magnitude in frequency. Using the Dark Energy Camera mounted on the 4-meter Blanco telescope, DWF captures a 20-second g-band exposure every minute, at a typical seeing of ~1" and an airmass of ~1.5. These optical data are collected simultaneously with observations conducted over the entire electromagnetic spectrum - from radio to gamma-rays - as well as cosmic ray observations. In this paper, we present a novel real-time light curve analysis algorithm designed to detect transients in the DWF optical data; this algorithm functions independently from, or in conjunction with, image subtraction. We present a sample of fast transients detected by our algorithm, as well as a false-positive analysis. Our algorithm is customizable and can be tuned to be sensitive to transients evolving over different timescales and flux ranges.

The field of transient astronomy is booming, with several successful completed, ongoing, and planned optical surveys specifically designed to find transient phenomena, some of which will come online in the coming years. Among the former, the Palomar Transient Factory/Intermediate Palomar Transient Factory (PTF/iPTF, Rau et al. 2009; Law et al. 2009), the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS, Magnier et al. 2013), the Sloan Digital Sky Survey (SDSS) Supernova Survey (Wolf et al. 2016), the Asteroid Terrestrial-impact Last Alert System (ATLAS) All-sky Survey (Tonry et al. 2018), the Gaia Survey (Gaia Collaboration et al. 2016), the Zwicky Transient Facility (ZTF, Bellm et al. 2019; Graham et al. 2019), the Dark Energy Survey (DES, Dark Energy Survey Collaboration et al. 2016), the All Sky Automated Survey for Supernovae (ASAS-SN, Kochanek et al. 2017), and the Transiting Exoplanet Survey Satellite (TESS, Ricker et al. 2015) have provided a census of a large variety of supernovae (SNe), tidal-disruption events, and exoplanet confirmations. In the latter category, the Vera C. Rubin Observatory and the Nancy Grace Roman Telescope will push our understanding of the transient sky towards deeper limits and longer wavelengths. An overview of the field of view (FoV), depth, and cadence of these surveys can be found in Table 1.

Note to Table 1 - The field of view, depth, and cadence of notable past, present, and future transient surveys. Some of the listed surveys have operated, or will at some point operate, at cadences shorter than those listed: e.g., the Deep Drilling fields for Rubin and the fast-cadence search in Gaia (Wevers et al. 2018). The TESS broadband filter spans roughly the Rc-, Ic-, and z-bands. The field of view quoted for DWF is the "on-sky" area of the science CCDs and not the typically listed footprint FoV, which includes the CCD gaps.
In conjunction with some of these optical surveys, gravitational wave (GW) detectors like the Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO, LIGO Scientific Collaboration et al. 2015) and Virgo (Acernese et al. 2015), and neutrino detectors such as IceCube (IceCube Collaboration 2005) and the Baksan Neutrino Observatory (Kuzminov 2012), have ushered in a new era of multi-messenger transient astronomy. We expect that the discovery of new and exciting transient phenomena will continue at a higher pace thanks to the Vera C. Rubin Observatory/Legacy Survey of Space and Time (LSST, Ivezić et al. 2019): the telescope is being constructed on Cerro Pachón, Chile (with a planned first light date in 2022 before commencing operations in 2023), and aims to image the sky in Wide, Fast, Deep mode at a depth of g ∼ 24 every 3 days (LSST Science Collaboration et al. 2017). The Nancy Grace Roman Telescope/Wide-Field Infrared Survey Telescope (WFIRST, Spergel et al. 2015) will cover an area of 9 square degrees at an average depth of J ∼ 25 with a cadence of 5 days in a proposed medium-depth supernova survey when it is launched on its scheduled date in 2025 (Akeson et al. 2019; Hounsell et al. 2018). We summarize the characteristics of these planned instruments in Table 1.

In this section we briefly summarize some of the seminal discoveries that the next generation of surveys will look to build upon. For example, Pan-STARRS results include the outburst from a SN progenitor one year before its explosion (Fraser et al. 2013), a lack of super-luminous SNe with light curves compatible with pair-instability models (Nicholl et al. 2013), and the first interstellar asteroid detection (de la Fuente Marcos & de la Fuente Marcos 2017). Similarly, the most recent SDSS survey results include baryonic acoustic oscillation measurements (Alam et al. 2017), evidence for the Epoch of Reionization around z ∼ 6 (Becker et al. 2001), high-redshift (z > 5.6) quasars, and indirect dark matter detection via weak lensing (Fischer et al. 2000). Finally, in recent years PTF/iPTF/ZTF have further enhanced our knowledge of tidal disruption events (TDEs, Hung et al. 2018; van Velzen et al. 2021), gamma-ray burst (GRB) orphan afterglows (Cenko et al. 2015; Ho et al. 2020), and hosts of novel SNe (Arcavi et al. 2010; Goobar et al. 2017; Kasliwal et al. 2010; Ofek et al. 2010; Horesh et al. 2012; Cao et al. 2013; Ofek et al. 2013). Due to the design of these facilities and their observational strategies, there is still a lack of discoveries of fast (≲ 1 hour) and faint (m_g ≳ 21) transients (see Figure 1, right panel).

Figure 1. Left: Comparison of the fields of view of various astronomical surveys, adopted from Laher et al. (2018). The DWF program uses the same telescope as DES, the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory in Chile. Right: The characteristic timescales and brightnesses of transients within the DWF detection phase space, adopted from Cooke et al. (2022, in prep.).

There are several science cases that benefit from the fast identification (and potential classification) of rapidly varying transients with simultaneous observations spanning the electromagnetic spectrum.
Fast radio bursts (FRBs) are a class of objects characterized by either single or repeating radio bursts evolving on sub-second time scales; finding emission at frequencies other than radio is important toward understanding the progenitors and ignition mechanism(s) for these events. The first emission other than radio observed to accompany an FRB was recently detected from a magnetar in the Milky Way at X-ray frequencies; although the bursts are weaker in the radio than extragalactic FRBs, this event suggests that magnetars may be the progenitors of at least some FRBs (Andersen et al. 2020; Bochenek et al. 2020; Lin et al. 2020).

Also, due to a lack of sufficient early-time observations of SNe, the shock breakout mechanism for type Ia SNe and the ignition mechanisms for core-collapse SNe (CC-SNe) are still not well understood. Type Ia SNe are critical for determining cosmological distance scales, and are used to measure the accelerated expansion of the universe (Riess et al. 1998). Despite their use as standard candles, the shock breakout mechanism for type Ia SNe is not conclusively understood (Nomoto 1982). Identification and follow-up of a type Ia SN within the first day, building on early-time observations of type Ia SNe by ZTF (Miller et al. 2020) and TESS (Fausnaugh et al. 2021), can lead to an understanding of the shock breakout mechanism, including the detection of a cooling tail, indicative of a delayed detonation transition (Piro et al. 2010). More importantly, the nature and progenitors of SNe Ia are unclear. The detection of UV/optical bursts from SN Ia ejecta colliding with a companion star (e.g., Cao et al. 2015) helps to secure the single-degenerate progenitor model for some fraction of events; these detections must occur on short timescales. A better understanding of type Ia SNe could be the key towards resolving the more than 3σ discrepancy between measurements of the Hubble Constant using the cosmic microwave background (e.g., by Planck, Hinshaw et al. 2013; Planck Collaboration et al. 2020) and measurements made using the cosmic distance ladder method (e.g., Riess et al. 1998, 2016), on which type Ia SNe are a vital rung.

The study of CC-SNe is important for understanding the ends of the life cycles of massive stars; CC-SNe are believed to be one of the drivers of the nucleosynthesis of elements heavier than iron (Arnett & Clayton 1970), in addition to collapsars (Siegel et al. 2019). Early detection and follow-up of CC-SNe can distinguish between various theorized ignition mechanisms such as magneto-rotational instabilities (Akiyama et al. 2003), standing accretion shocks (Blondin et al. 2003), acoustic shocks (Burrows et al. 2007), and QCD phase transitions (Sagert et al. 2009).

The first GRB orphan afterglow may have been detected in the radio (Law et al. 2018), but it was discovered at such a late time after its prompt emission that deep optical follow-ups resulted in only upper limits. Searches for orphan afterglows and kilonovae are also being performed by ZTF (Andreoni et al. 2020a; Ho et al. 2020). One candidate orphan afterglow (Coughlin et al. 2020) was later associated with prompt γ- and X-ray emission from GRB 201103B (Svinkin et al. 2020); the optical component was reported first by ZTF, instilling confidence in the veracity of their method for detecting and identifying orphan afterglows. The study of orphan afterglows would allow us to calculate GRB jet angles, as well as the true GRB rate (Rhoads 1997).
Although we have made many discoveries with surveys such as PTF, ZTF, and ASAS-SN, and plan on continued success with the Vera C. Rubin Observatory and the Nancy Grace Roman Space Telescope, there is a void in the parameter space for fast, faint transients that remains unfilled. Our understanding of GRB orphan afterglows, short GRBs, FRBs, SN ignition mechanisms and shock breakouts, and electromagnetic counterparts to gravitational wave events can be greatly enhanced by detecting these transients in real time across several segments of the electromagnetic spectrum. Due to the rarity of these events, the use of wide-field facilities is needed. In this paper we present an automated, customizable fast transient identification algorithm centered mainly on Deeper Wider Faster (DWF) program source light curve analysis. We summarize the DWF program in Section 2 and describe the DWF data sets analyzed in this work. In Section 3 we motivate the need for a transient detection algorithm that is independent from image subtraction, and present the elements of a novel fast transient detection algorithm. The results of running the algorithm on both real-time DWF data and later-time, further-processed data sets are presented in Section 4. Finally, in Section 5, we describe how this algorithm will be deployed in future DWF runs and how it can be used with data from other surveys.

The DWF program observes its target fields with DECam at a ∼1 minute cadence using 20 s exposures (Table 1). We note that the difference between the 1 minute cadence and the 20 s exposure times is due to inefficient readout overheads. In conjunction with optical observations carried out with DECam on the 4-m Blanco Telescope in Chile, wide-field ground- and space-based observatories spanning the entire electromagnetic spectrum are coordinated either to simultaneously collect data on the same region of the sky, or to trigger rapid (or later-time) follow-up of transient sources. The DWF program is carried out for one week twice annually. Data collected by DECam for real-time analysis are highly compressed (Vohl et al. 2017), to minimize transfer time, and sent directly from the summit on Cerro Tololo, Chile to the OzSTAR supercomputer at Swinburne University of Technology in Australia for processing and analysis. In addition, these data are also transferred using lossless compression and fully processed by a modified version of the photpipe NOAO Community Pipeline (Rest et al. 2005) at a later time. The DWF program, like many other transient surveys (e.g., PTF and the SN Legacy Survey, among others; Cao et al. 2016; Perrett et al. 2010), relies on an image subtraction pipeline (the Mary pipeline, Andreoni et al. 2017) to detect potential sources of interest in real time. A ranked list of candidates is presented to astronomers and volunteers for further visual inspection of image cutouts (a small fraction of the DECam FoV centered on a single detected source) and light curves using the interactive tools described in Meade et al. (2017). Here, we describe the DWF data stream in more detail, the light curve creation process, and the final inputs that will be fed into the transient identification algorithm.

The data collected by DECam for the DWF program are unique among transient surveys in their cadence, and therefore offer the potential for "first of their kind" discoveries. For DWF, the 4-meter Blanco telescope, on which DECam is mounted, collects continuous 20 s exposures at ∼1 minute cadence, when including readout time. In each 20 s exposure, DECam reaches a depth of g ∼ 23 under normal DWF observing conditions, ∼1.0 arcsecond seeing at ∼1.5 airmass.
The slightly higher than ideal airmass is due to the visibility requirements for simultaneous observations in the radio, conducted by telescopes in either Australia or South Africa, and by telescopes operating at other wavelengths in the Antarctic, North America, and other locations, including space-based telescopes. The g-band is selected as the main observing band for DWF because DECam sensitivity is ∼0.5 magnitudes deeper there than in redder filters, many fast bursts are hot and blue, and DWF target fields are typically at low Galactic extinction. Most DWF target fields have template reference images taken prior to the run in multiple filters. In addition, if there are no reference images (i.e., for newly discovered FRB or short-GRB fields), the target fields are observed either at the start or end of the night (or both) in other filters, typically the r- and i-bands, to determine source colors.

The DWF program collects data with DECam over a ∼3 deg² FoV. This wide field is covered by 62 individual DECam CCDs. The data from each CCD are saved as an extension in a multi-extension FITS file. These data are processed and analyzed in two ways. Firstly, for real-time, or fast, analysis, the image files are 'lossy' compressed at the summit using the method described in Vohl et al. (2017) and sent to the OzSTAR supercomputer at Swinburne University of Technology for data analysis. Uncompressed data transfer from the Cerro Tololo summit in Chile to Australia is too slow to enable data processing, analysis, and transient candidate identification within minutes, which is necessary for fast transients. The lossy compression is tunable to the speed of the internet connection and can speed up transfer by compressing the data up to ∼20×, while still enabling detection of ∼95% of the transients. Furthermore, to enable fast identification and rapid-response follow-up triggers, these data are 'fast' processed in parallel on the OzSTAR supercomputer. The fast processing sacrifices some aspects of a full processing pipeline for speed. Both the lossy compression and the fast processing result in several artifacts in the images that are not typically observed in conventional transient pipeline analyses.

The real-time data processing for the data collected on the dates used in this work includes using SWarp (Bertin et al. 2002) to align and stack images, SExtractor (Bertin & Arnouts 1996) to identify sources, and HOTPANTS (Becker 2015) to perform image subtractions. After performing image subtraction and source extraction on the differenced images, the Mary pipeline runs a machine learning algorithm on the potential candidates to minimize CCD artifacts. Aperture photometry with an aperture of one full width at half maximum was forced on the coordinates of sources that contained a residual following an image subtraction. The remaining candidates are then ranked based on their presence in the Second-Generation Guide Star Catalog (GSC-II, Lasker et al. 2008) and in previous nights of the DWF run, with higher rankings given to those sources that are present in neither GSC-II nor previous DWF nights. Data analyzed in this manner will be referred to as "real-time" data. We note that the real-time processing is different for later runs. Secondly, the data are separately sent to the NOAO High-Performance Pipeline System to provide post-run, fully processed and well-calibrated data for later-time analyses.
These data are used for fast transient detection after a burst, fast transient searches cross-matched with other wavelengths, fast transients associated with slower-evolving events (e.g., supernova shock breakouts), slower-evolving events caught early by DWF, and other applications. For the data used here, sources were identified using SExtractor; however, the images are neither stacked nor image-subtracted. Automatically calculated apertures were forced on the coordinates of all sources 1.5σ greater than the background. Magnitudes of SExtractor-identified sources are calibrated against the SkyMapper Data Release 2 catalogue (Onken et al. 2019). Data analyzed in this manner will be referred to as "post-run" data.

For both the real-time and post-run data processing methods, light curves are generated for sources that have one or more detections at the same coordinates, using aperture photometry on non-subtracted images; DWF targets are named using these coordinates. For each DWF source, a data point or upper limit is generated every ∼1 minute, unless the source location falls off the CCD, either into chip gaps or off the edge of the DECam FoV, as a result of small offsets in guiding, tracking, and hexapod tip-tilt corrections, changing weather, moving to a new field, etc.

There are a total of 5 DWF fields analyzed in this work, shown in Table 2 (note to Table 2: fields marked with a + are those with real-time data; fields marked with a † are those that have been analyzed in Webb et al. 2020). There are 2 "real-time" data sets, covering the CDFS Legacy and FRB171019 fields. There are 5 "post-run" data sets, covering two epochs each on the 4-hour and Antlia fields and one epoch on the FRB010724 field. The two 4-hour and Antlia epochs analyzed here are from two separate runs, spaced 11 months and 16 months apart, respectively; this second pointing can help establish whether there is any recurrence or periodicity to the transient behavior observed. The 4-hour field is one of the first fields observed by DWF. The first DWF run employed an observational routine with dithering. Analyzing the first run on the 4-hour field can determine how robust our algorithm is to dithered data; subsequent DWF runs have moved away from the dithered approach, in part due to the confounding issues discussed in Section 4.1. The Antlia field was chosen for analysis in part because comparisons can be drawn between this work and the work done in Webb et al. (2020). The FRB010724 data are from a dense field, with 839,729 light curves generated over 5 days. The two "real-time" fields were chosen out of necessity; older "real-time" data were not stored for later analysis, and the COVID-19 pandemic, which halted operations for many observing sites across the world, precluded the acquisition of "real-time" data sets from Cerro Tololo for several months. The results of running the Fast Transient Finding (FTF) algorithm on the data sets in Table 2 are presented in Section 4. The naming convention for the light curves presented in this paper is the survey name, DWF, followed by the right ascension (RA) and declination (DEC) in sexagesimal coordinates, as follows: DWFRADEC.

The challenges of "big data" in astronomy have been well documented (e.g., Feigelson & Babu 2012; Zhang & Zhao 2015; Kremer et al. 2017).
As seen in Table 1, the cadence of many optical transient surveys allows for longer processing times, but this can limit the speed with which astronomers detect transient phenomena, with potential delays of several days between the start of an event and its detection. DWF takes a different approach from other optical surveys, and presents new challenges in analyzing incoming data in "real time". The real-time, fast processing by DWF (described in Section 2.1) is by no means ideal or optimal; the lossy compression adds artifacts, and the fast data processing is much poorer than normal processing, creating additional artifacts and poorer astrometry and alignments, which can yield poorer subtractions. This sub-optimal fast processing is necessary, however, in order to identify events and trigger follow-up before sources fade. Detailed follow-up of these events, ideally spectra, can shed light on the early phases of SNe, GRB afterglows, potential FRB optical counterparts, and other transient phenomena. The challenges outlined here are unique to DWF due to the fast cadence and the opportunity for transient detection on minute time scales. It is these challenges that motivate the work presented here.

Despite its ubiquity, the use of image subtraction techniques to identify transient sources is fraught with challenges. The convolution of point spread functions between images can be challenging, if not impossible, between different instruments and different seeing conditions; even when feasible, convolution can be computationally intensive. Source extraction codes (or the astronomers interpreting their outputs) can be fooled by sources that are not real, for example cosmic rays, cross-talk on images, or misaligned subtractions. As surveys search for transients to fainter magnitudes, they begin to hit the noise/source threshold, and many detections are too ambiguous to accurately identify. The very large number of source detections in image-subtracted frames, and the inability to have humans analyze them all in a reasonable time frame (especially for fast transients), or do so with any solid accuracy to trigger follow-up observations, necessitates the use of machine learning frameworks to identify false positives (Masci et al. 2017; Díaz et al. 2016; Duev et al. 2019), further complicating the process and increasing computational demands. Furthermore, the machine-learning (ML) approach requires extensive training and large training-set samples (typically in the hundreds of thousands of images), increasing the demands on human time and capital. The challenges associated with image subtraction have led to attempts to identify transient sources through direct image comparisons (Wardęga et al. 2020) and light curve analysis (e.g., Wevers et al. 2018; Liu et al. 2020; Webb et al. 2020). In addition, definitive source classification is hardly possible with image subtraction alone: transient characterization is confirmed through follow-up observations, including spectroscopic data; however, telescope time on sensitive spectroscopic instruments is very limited, and the time window for observing fast-fading sources is narrower than the cadence of conventional transient surveys. If enough data on the source are rapidly available for a light curve to be made (e.g., within a few minutes of the first data acquisition), a preliminary classification can be performed using simple metrics such as the rise and/or decay rate and the peak brightness.
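As a concrete illustration of such metrics, the minimal sketch below computes the peak brightness and the mean rise and decay rates from a single light curve. It is an illustrative example under assumed inputs (times in minutes, calibrated magnitudes), not code from the DWF pipeline, and the function name is ours.

```python
import numpy as np

def quick_metrics(t_min, mag):
    """Rough light-curve metrics: peak brightness and mean rise/decay rates.

    t_min : observation times in minutes from the first exposure
    mag   : observed magnitudes (smaller values are brighter)
    """
    t_min = np.asarray(t_min, dtype=float)
    mag = np.asarray(mag, dtype=float)
    i_peak = int(np.argmin(mag))          # brightest point of the light curve
    peak_mag = mag[i_peak]

    # Mean slopes (mag/minute) before and after the peak; negative = brightening.
    rise_rate = decay_rate = np.nan
    if i_peak > 0:
        rise_rate = (mag[i_peak] - mag[0]) / (t_min[i_peak] - t_min[0])
    if i_peak < len(mag) - 1:
        decay_rate = (mag[-1] - mag[i_peak]) / (t_min[-1] - t_min[i_peak])
    return peak_mag, rise_rate, decay_rate

# Toy example: a source that brightens by ~1 mag in 10 minutes, then fades.
t = np.arange(30.0)
m = np.concatenate([np.linspace(20.0, 19.0, 10), np.linspace(19.0, 19.8, 20)])
print(quick_metrics(t, m))
```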
This early classification using light curves can inform astronomers about how best to allocate the always-limited follow-up resources to these targets; this kind of observation strategy will be crucial for LSST, and is the service that brokers will be performing for the community (Förster et al. 2020; Smith 2019; Patterson et al. 2019; Möller et al. 2021). With better sampling, including light curves spanning multiple wavelength bands, a more precise classification of sources can be achieved (Bloom et al. 2012; Ball et al. 2006; Debosscher et al. 2007; Richards et al. 2011; Kim & Bailer-Jones 2016; Jamal & Bloom 2020), but progress on this front is still minimal due to the complexity of the data and classification algorithms.

Given the obstacles inherent in using image subtraction techniques, and the necessity of a light curve analysis to classify peculiar transient events, we propose to identify these transient phenomena with a direct light curve analysis of the DWF data stream. The algorithm we describe can be used as an independent verification for candidates detected via other methods (e.g., image subtraction, machine learning algorithms); a flow chart of the FTF algorithm is shown in Figure 2.

Figure 2. For a given DWF field, a total of N sources is detected. A light curve (LC) is generated for each source during post-run processing, or in real time for those sources identified as potential transients from image subtraction (see Section 2.1 for a thorough discussion of the different data types). Each LC is fed into the algorithm described in Section 3.1 in a parallel-processed manner. The LCs are separated into n − (SW − 1) different sliding windows, where n is the number of data points in the LC and SW is the size of the sliding window. Each of these sliding windows is processed in parallel, fit linearly, and the sign of the slope is determined: positive, negative, or flat. The signs of the slopes of the individual sliding windows are recombined, and the number of inflection points (IPs) in the LC is counted. Those LCs with fewer than 5 IPs are saved as potential transients.

For each unique source observed during the DWF run, we separate the light curve data using a sliding window (SW), a technique common in financial time series analysis (Chou & Nguyen 2018; Chou & Truong 2019; Karathanasopoulos et al. 2016) as well as in machine learning applications across several disciplines (Dietterich 2002; Kaneda & Mineno 2016; Selvin et al. 2017; Helwan & Uzun Ozsahin 2017). The user can define the size of the sliding window parameter, but is limited by the number of data points contained within an individual light curve file (light curves may have missing points due to changing weather conditions, upper limits, or artifacts that prevent our photometric pipeline from accurately estimating the magnitude of the source). The source code for the FTF algorithm is publicly available. Based on the typical field cadence and the number of points per light curve, we can assess the best SW size. We emphasize here that, while we focus in this paper on finding known categories of fast-evolving transients (e.g., GRB afterglows, kilonovae, etc.), the FTF algorithm can be easily customized for different or novel types of variable phenomena by changing the sliding window size (Figure 3) and the slope threshold (Figure 4); the search for new types of fast-evolving transients is an important focus of DWF, and we will use the FTF to pursue these targets in the future.
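The sketch below illustrates the sliding-window logic summarized in the Figure 2 description: each light curve is split into n − (SW − 1) windows, each window is fit linearly, the slope signs are recombined, and changes of sign are counted as inflection points. This is a hedged re-implementation for illustration only; the publicly available FTF code is the authoritative version, and the SW = 5 default, the toy light curve, and the flat-slope threshold value used here are assumptions.

```python
import numpy as np
from scipy.stats import laplace

def window_slopes(t, mag, sw=5):
    """Fit a line to every length-`sw` sliding window of one light curve.

    Returns the n - (sw - 1) window slopes (mag per minute); assumes
    len(mag) >= sw and that t is in minutes.
    """
    slopes = []
    for i in range(len(mag) - sw + 1):
        alpha, _ = np.polyfit(t[i:i + sw], mag[i:i + sw], 1)  # slope, intercept
        slopes.append(alpha)
    return np.array(slopes)

def count_inflection_points(slopes, flat_threshold):
    """Count changes of the ternary slope sign (positive / negative / flat).

    A window is treated as flat when |slope| <= flat_threshold, following the
    alpha = 0 +/- b criterion described in the text.
    """
    signs = np.where(np.abs(slopes) <= flat_threshold, 0, np.sign(slopes))
    return int(np.count_nonzero(signs[1:] != signs[:-1]))

def flat_threshold_from_field(all_window_slopes):
    """Estimate b from the field-wide slope distribution via a Laplace fit."""
    _, b = laplace.fit(all_window_slopes)   # returns (location mu, scale b)
    return b

# Toy example: a flare-like light curve sampled once per minute.
t = np.arange(40.0)
mag = 20.0 - 1.5 * np.exp(-0.5 * ((t - 15.0) / 3.0) ** 2)
slopes = window_slopes(t, mag, sw=5)
print(count_inflection_points(slopes, flat_threshold=0.01))  # assumed b value
```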
In Figure 3, we present histograms of the number of data points present in each light curve for all of the fields and runs analyzed in this work, for both the "real-time" and "post-run" data. The data points for both the real-time and the post-run data are generated by forced photometry. In the real-time data, aperture photometry with an aperture of one full width at half maximum was forced on the coordinates of sources that contained a residual following an image subtraction. In the post-run data, automatically calculated apertures were forced on the coordinates of all sources 1.5σ greater than the background. The red dashed line in Figure 3 represents our choice of SW = 5; this avoids the predominance of noisy light curves (SW < 5) and mitigates the risk of averaging over rapidly rising and falling light curves or flares with larger windows (SW > 5).

Over each sliding window, we compute a simple linear fit, g = αt + g_0 (where g is the observed magnitude, t is the time in minutes from the first observation, α is the temporal decay index, and g_0 is the intercept), and return the slope and its uncertainty. We do not perform our fit using the uncertainties in the photometry, for reasons of efficiency. A histogram of the slopes of each window from the real-time and post-processed fields is plotted in the two panels of Figure 4. We find that the distribution of slopes is well modeled by a Laplace distribution, with probability density function f(α | μ, b) = (1/(2b)) exp(−|α − μ|/b), where μ is the mean (which, for this distribution, is equal to the median as well as the mode), the variance is 2b², and the average absolute deviation is b. From the linear fits, we obtain the sign of each window slope as positive, negative, or flat; we consider the slope flat (non-changing) if α = 0 ± b, as shown by the red dashed lines in Figure 4. The algorithm keeps track of the sign of the slope over each sliding window and notes a change in the sign of the slope as an inflection point (IP). Scanning over each sliding window, the algorithm tallies and records the number of IPs. For example, a typical fast, previously unknown transient may have a number of IPs between 0 (straight rising or decaying behavior) and 3 (e.g., a flare with one IP as it begins to rise, one as it turns over to fading, and one as it returns to flat).

The aforementioned process is time consuming and CPU-intensive. Since our ultimate goal is to provide real-time identification of fast transients from the DWF data stream, we implement a full parallelization of our Python-based algorithm; each target can be run independently and in parallel during the DWF run. This parallelization enables the code to run over each DWF source in ≲1 minute, on par with the cadence of incoming data points. The efficiency of the code is important for real-time identification of transients, especially when deployed on multi-core CPUs, like the supercomputer at Swinburne University, where the optical data from DWF runs are analyzed. For each light curve we calculate the number of IPs, and we then group all objects and light curves with the same number of IPs. For this work, we focus on light curves that have four or fewer IPs within the typical DWF light curve (∼1 hour).

• Light curves with zero IPs that are monotonically increasing or decreasing could be longer-evolving transients: Cepheids, RR Lyrae, or SNe, for example. These could also be the beginning of a fast-evolving transient at the end of a DWF observation.

• Light curves with one IP might be catching the start of the rise or fall of a transient evolving on minutes-to-days time scales.
• Light curves with two or three IPs may contain peaks or dips spanning the entire DWF time on the field (typically 1-4 hours).

• Four-IP light curves could point towards more complex behavior that goes through several phases over the course of the DWF observation.

It is important to note that transient phenomena may have begun before the DWF run started, or may continue after data acquisition has stopped. Therefore, a burst-like event might only have two or three IPs, as its light curve might be shifted towards the beginning or end of a run in such a way that some parts of the curve are not sampled. For the FTF algorithm, we define "fast transient" candidates as those with fewer than four inflection points and with at least one sliding window with a slope greater than a user-specified slope; in this iteration of the algorithm, that slope threshold is set by the statistical measures described in Section 3.2, but it could be set manually by the user if searching for a specific type of source with a known range of slopes. The code could be modified to include sources with more than four inflection points, which point towards variable sources that could be of interest to other areas of astronomy. Using this algorithm, the first potential transients can be reported just after the first five minutes of observation by DWF; thereafter, the number of IPs associated with each light curve is updated every minute, as the sliding window shifts over by one data point. A noted inflection change with a corresponding positive detection in the image subtraction pipeline provides good evidence to trigger imaging and spectroscopic follow-up.

In Figure 5 we show how the FTF algorithm works on a sample light curve, using a flaring star first detected in Webb et al. (2020) as an example. The light curve for this flaring source, DWF102955.559-360035.170, is plotted in the left panel of Figure 5. The right panel of Figure 5 shows the slope derived from each sliding window as a function of time. We can clearly see that the flare in the light curve and the corresponding inflection points enable the identification of a change in brightness beyond the source's typical brightness. While this information may be used to trigger follow-up observations, the subsequent data demonstrate that there are more inflection points, and therefore the source is not a fast transient as classified in Section 3.3. The dashed red lines in the figure represent the same thresholds identified in the histograms presented in Figure 4; users can set a different threshold to identify different transients of interest, as shown by the blue dashed lines.

In this section we present the outcome of our FTF algorithm and its implications for 1) the detectability of fast transients of different natures and the rate of detection of these objects compared to other surveys, and 2) the required effort for spectroscopic follow-up for secure classification. In summary:

• On a single night of observation on a single field we obtained on average ∼50,000 real-time light curves and ∼340,000 post-processed light curves.

• Feeding these 50,000 real-time and 340,000 post-processed light curves into the FTF algorithm, we detect, on average, 150 (∼0.3%) and 3,000 (≲1%) potential fast transients, respectively.
• Checking the science frames of the potential fast transients for artifacts and other non-astronomical sources, we obtained, on average, 1 statistically significant fast transient per field in the real-time data and 13 statistically significant fast transients per field in the post-processed data.

• Based on light curve fits, 69 sources from the fields described in Table 2 can be classified as "potential transients" under our definition in Section 3.1.

• In the event of a fast transient identification, FTF would allow a latency time of just 5-15 minutes for multiwavelength and spectroscopic follow-up observations.

A detailed breakdown of the results for each field can be found in Table 3 (note to Table 3: the FTF algorithm identified ≈1% of the light curves in the fields studied as potential transients, reducing the number of real-time light curves that require human inspection by two orders of magnitude; of those light curves identified by the algorithm, about 0.5% are identified by a human observer as potentially real astrophysical phenomena, after rejecting sources with obvious non-astrophysical explanations; fields marked with an * are those where real-time data were analyzed). We note that the 69 sources identified as "potential transients" are fast-evolving sources that would require follow-up, specifically spectroscopic follow-up, to determine whether these sources are indeed transients or other variable sources.

Once a list of candidates is generated using the FTF algorithm (within the first 5-10 minutes of the run, and then every minute thereafter), as shown in Section 3.4, the light curves are vetted by our team; for those light curves that pass human inspection, image cutouts of the source's location on the sky are visually inspected to exclude the presence of artifacts that survived our processing pipeline (e.g., cosmic rays, bad pixels, bad rows/columns, etc.). For the purposes of this paper, a positive detection is defined as one where the source is either a known variable, a DWF variable detected by other methods (see, for example, Webb et al. 2020), or a newly discovered candidate that passes a visual inspection of the images associated with the light curves. To confirm known variables, we checked the coordinates of our candidates against known variable source catalogs such as the General Catalog of Variable Stars (Samus' et al. 2017) and the International Variable Star Index (Watson et al. 2017). It is important to note that the real-time light curves only exist for those sources that were identified as candidates via image subtraction as a part of the Mary pipeline analysis; in contrast, the light curves from the post-run processing with the NOAO pipeline encompass all sources that were detected during the run. The sources in both the real-time and post-run data sets include point sources and extended sources. The linear fits plotted in the subsequent figures (Figures 6-10) are meant to give an idea of the general trends in the light curves; they are not the slopes associated with the sliding windows, as shown in Section 3.4, nor are they necessarily the best fit for the data.

DWF011805.113-751125.458, whose light curve is plotted in the left panel of Figure 6, is a known RR Lyrae star, BG Tuc (Hoffmeister 1963; Geßner 1981). The light curves of DWF011805.113-751125.458 for each night of the DWF observing run are plotted in the right panel of Figure 6; this figure shows the variability of the object over long time scales.
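As a hedged illustration of the catalog cross-check described above, the sketch below matches candidate coordinates against a locally stored variable-star catalog using astropy. The file name, the column names, and the 2-arcsecond match radius are assumptions made for this example and are not values taken from the DWF pipeline.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

# Approximate coordinates of two candidates, used purely as an example.
cand = SkyCoord(ra=[157.48, 19.52] * u.deg, dec=[-36.01, -75.19] * u.deg)

# Assumed: a pre-downloaded variable-star catalog (e.g., a GCVS or VSX export)
# with RA/Dec columns named "RAJ2000" and "DEJ2000", in degrees.
catalog = Table.read("variable_star_catalog.fits")
known = SkyCoord(ra=catalog["RAJ2000"] * u.deg, dec=catalog["DEJ2000"] * u.deg)

# Nearest-neighbour match; flag candidates within an assumed 2" radius.
idx, sep2d, _ = cand.match_to_catalog_sky(known)
for c, sep in zip(cand, sep2d):
    status = "known variable" if sep < 2.0 * u.arcsec else "no catalog match"
    print(f"{c.to_string('hmsdms')}: {status} (separation {sep.to(u.arcsec):.2f})")
```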
The behavior of DWF102920.187-355700.211 on both the night of 180607, plotted in the upper left panel of Figure 7, and the night of 180608, plotted in the upper right panel of Figure 7, was identified by the FTF algorithm as potential transient phenomena. The first night of data shows a source decreasing from g ∼ 17.6 to g ∼ 18.4 in 30 minutes of observation. The data from the second night show a source with a baseline magnitude of g ∼ 20 that dips dramatically, twice: once by 2 magnitudes and a second time by 1.5 magnitudes, each occurring in the space of a few minutes. Visual inspection of the first night of data revealed no signs of contamination by non-astrophysical sources. The analysis of the data from the second night, shown in the bottom panel of Figure 7, reveals that DWF102920.187-355700.211 and the dimmer stars in the vicinity all become very faint; passing clouds would account for the apparent dimming of the source during the second night, provided the clouds passed over this region of the sky and not the region containing the reference stars for the field. We believe that DWF102920.187-355700.211 was displaying some genuine transient phenomena on the first night of observation before reaching a quiescent phase in the second and third nights.

In Figure 8, we present two light curves that showcase when the FTF can identify a fast-evolving transient in the DWF "real-time" data stream, and how quickly astronomers can trigger other resources. The left panel shows the source DWF040623.456-550041.171 at around g ∼ 17 before it drops by 0.7 magnitudes over a 10-minute period. For a source like this, the FTF algorithm would alert astronomers within the first few data points (within 10 minutes in this case) after the light curve deviates from a flat position. The right panel of Figure 8 shows the light curve of DWF102613.233-350150.332 rising by about 0.8 magnitudes in 40 minutes before undergoing a seemingly exponential decay over the remainder of the observations. This source would be identified as a potential transient after the first 5 data points, due to the steep nature of its increasing brightness. Furthermore, the FTF algorithm would identify an inflection point within 10 minutes of the object's drop in brightness, notifying astronomers of a change in the behavior of this object.

In this section we present a sample of light curves that were identified as possible transients by the FTF algorithm but, after further analysis, were determined to be bogus. The most common type of light curve that confounded the FTF algorithm was one involving an astronomical source interacting with the edge of one of the 62 science CCDs that make up the DECam detector (Honscheid & DePoy 2008), pictured in Figure 1 under the label DES (Dark Energy Survey); the large number of CCDs increases the chance of edge interactions. As a source moves onto or off of a CCD, the light curve can show a peak or a dip mimicking a fast-rising or fading transient. This effect was exacerbated by early DWF observational strategies employing a dithering routine (e.g., the first run on the 4-hour field analyzed in this paper); dithering patterns are no longer favored by DWF, in part for this reason. This issue can be remedied by ignoring data collected near the edge of a detector. This information is not always available in cataloged data sets, but edge proximity can be easily identified using software that analyzes the dimensions of the science image (i.e., is it square?) and by machine learning algorithms.
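As a hedged sketch of such an edge check, the code below converts a candidate's sky position into pixel coordinates on its CCD and rejects it if it falls within an assumed margin of the detector edge. The file name, extension number, and 30-pixel margin are placeholders, not DWF pipeline values.

```python
from astropy.io import fits
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
import astropy.units as u

def near_ccd_edge(fits_path, ext, ra_deg, dec_deg, margin_pix=30):
    """Return True if the source lies within `margin_pix` of the CCD edge."""
    with fits.open(fits_path) as hdul:
        header = hdul[ext].header
        ny, nx = hdul[ext].data.shape          # image dimensions (rows, cols)
    wcs = WCS(header)
    coord = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg)
    x, y = wcs.world_to_pixel(coord)           # 0-indexed pixel position
    return (x < margin_pix or y < margin_pix or
            x > nx - 1 - margin_pix or y > ny - 1 - margin_pix)

# Example usage (placeholder file and extension): drop edge-affected candidates.
# if near_ccd_edge("decam_exposure.fits", ext=12, ra_deg=62.0, dec_deg=-55.0):
#     print("candidate rejected: too close to CCD edge")
```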
In Figure 9, we present an example of an astronomical source, DWF040903.800-554603.567, appearing to exhibit transient behavior. In the left panel of Figure 9, the light curve dims by > 0.1 magnitude in one minute before continuing to decay over the next five minutes. Upon visual inspection of the FITS images (right panel of Figure 9), it is clear that the telescope shifted slightly, placing the source on the edge of the detector; afterwards, the source slowly moves out of frame.

Figure 7 (caption, in part): There does not appear to be contamination from non-astrophysical effects in the image cutouts from this night of data, so we identify this source as real. Top right: The light curves for the other nights on which this source is observed. Overall, the source declines from g ∼ 17.5 on the first night before settling to an almost constant g ∼ 20 magnitude on the second and third nights; there are, however, dips of about 2 magnitudes during the second night. Bottom: The image cutouts for the second night of data on this source. As can be seen in one of the middle rows, the source and those nearby all seem to fade, indicative of clouds that may not have been visible to the astronomers on the ground.

In Figure 10, we present a light curve that was misidentified as a transient due to edge effects, but for a slightly different reason than that shown in Figure 9. In the left panel of Figure 10, we present the light curve of DWF040748.870-541956.717, which appears at a magnitude of g ≈ 19.7 out of a 5σ background upper limit of g < 22.5. The source then proceeds to decay by 0.6 magnitudes over the course of about 7 minutes. Upon inspection of the images, shown in the right panel of Figure 10, a bright source is seen moving out of frame. We suspect that the cause of the appearance of the source at g ≈ 19.7, more than 50 minutes after the observation of the field began, is the following: 1) a bright star was present in the field at coordinates slightly offset from those of DWF040748.870-541956.717; 2) this bright source began to move out of frame; 3) eventually the centroid of the star moved out of frame, but light from the star was still being detected, and the NOAO pipeline then identified a new source using coordinates still in frame; and 4) as the source continued to move out of frame, the brightness of the object continued to decrease.

The Deeper, Wider, Faster program is unique in terms of its depth (g ∼ 23 per image) and its short cadence (∼1 minute) when compared to other transient surveys, occupying a parameter space with a distinct lack of coverage (e.g., Andreoni et al. 2020b, Figure 6). In addition to its depth and cadence, DWF offers a new way to explore transient phenomena due to the simultaneous wide-field multi-wavelength observations performed across the entire electromagnetic spectrum. Identification of transient phenomena in transient surveys has heavily relied on the imperfect science of image subtraction. Image subtraction is necessary in some cases, such as the identification of a transient within a bright host galaxy. Identification of transients via light curve analysis can be done independently from image subtraction, or in concert with image subtraction techniques. Light curve analysis can identify variable objects with small changes in brightness that might be missed in an image subtraction, for example exoplanet transits.
In addition, the rudimentary classification of transient phenomena requires analysis of the light curves of these objects, with more refined classifications relying heavily on a spectral analysis of the object. In this work, we present the Fast Transient Finding (FTF) algorithm, capable of identifying transient phenomena in the DWF data stream light curves both independently of image subtraction (e.g., the "post-run" data in Section 4.1) and in tandem with an image subtraction algorithm (e.g., the "real-time" data in Section 4.1 and the * fields in Table 3). We focused on identifying fast transients (e.g., explosive phenomena) in this paper, but we also demonstrate how the FTF algorithm can be customized to find other kinds of transients and variables. This type of algorithm occupies a unique space within the transient detection landscape. Most currently operating optical surveys do not detect intra-night variability and, as such, miss the opportunity to alert the community for possible follow-up of fast-evolving transients such as GRB and FRB counterparts.

Figure 10 (caption, in part): This source seems to appear at g ∼ 19.7 before rapidly fading by ∼0.6 magnitudes in 7 minutes. Right: Image cutouts centered on the position of DWF040748.870-541956.717 on the sky. Upon further inspection, it appears that a bright source is on the edge of the CCD; as it moves further off the edge, a small section of the star is still visible. This small bit of flux is assigned to coordinates still in the field of view of this CCD, and a new source is generated in the catalog. As the source continues to move off the CCD, the measured flux decreases.

Figure 11. Current data flow for incorporation of the FTF algorithm into the DWF pipeline. The green arrows represent the new data flows described in this paper. Data from DECam are compressed (Vohl et al. 2017) and sent to Australia for analysis. The images are processed by the Mary pipeline and image subtractions are performed. Light curves of sources flagged through image subtraction are fed to the FTF algorithm. The light curves are analyzed and the results are fed to the data visualization tools (e.g., Meade et al. 2017), or used directly to trigger follow-up observations.

We see the work in this paper as the first step towards the implementation of real-time transient classification. We will first identify potential transients using the FTF algorithm. Next, we will combine the multi-wavelength data sets obtained by DWF for sources of interest. We will either extract features from this combined multi-frequency data set or run a deep learning classification algorithm in real time (Cucchiara et al., in preparation). The FTF algorithm will be incorporated into the DWF pipeline and deployed on the next DWF run, as shown in Figure 11. In its first iterations, the algorithm will work off of the light curves generated by the image subtractions performed by the Mary pipeline. When a source is first identified as a candidate by image subtraction, a light curve will begin to be populated for that source. If the slope of the light curve of that source is above some threshold, which we can select manually for specific sources (very high for flare stars or slightly lower for slower-evolving transients) or automatically using a statistical measure (e.g., Figure 4), then that source will be identified as a potential fast transient candidate. Candidates from the image subtraction are provided to human observers using interactive visualization tools.
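A minimal sketch of that selection step, combining the fewer-than-four-inflection-points rule with a slope threshold, is shown below. The numerical defaults are illustrative assumptions; in a real deployment the thresholds would come from the field-wide slope distribution (e.g., Figure 4).

```python
import numpy as np

def is_fast_transient_candidate(t, mag, sw=5, slope_threshold=0.02,
                                flat_threshold=0.005, max_ips=4):
    """Flag a light curve as a fast-transient candidate.

    Mirrors the selection described in the text: fewer than `max_ips`
    inflection points and at least one sliding-window slope steeper than
    `slope_threshold`. All numerical defaults here are illustrative only.
    """
    slopes = np.array([np.polyfit(t[i:i + sw], mag[i:i + sw], 1)[0]
                       for i in range(len(mag) - sw + 1)])
    signs = np.where(np.abs(slopes) <= flat_threshold, 0, np.sign(slopes))
    ips = int(np.count_nonzero(signs[1:] != signs[:-1]))
    return ips < max_ips and bool(np.any(np.abs(slopes) > slope_threshold))

# Example: a candidate flagged by image subtraction that also passes the cut.
t = np.arange(20.0)
mag = np.concatenate([np.full(8, 20.0), np.linspace(20.0, 19.2, 12)])
print(is_fast_transient_candidate(t, mag))
```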
We will give priority to sources that are flagged as potential transients by the FTF algorithm, as these sources are both image-subtraction candidates and FTF candidates. As more data are generated, sources with more inflection points will drop out of the FTF candidate list. We can trigger follow-up of image-subtracted and FTF candidates to classify these sources in real time (e.g., with detailed spectra). Due to the general nature of the FTF algorithm, we will look to apply it to other data sets, both proprietary and publicly available. In particular, some authors of this paper are members of the Rubin Science Collaboration or are Rubin Observatory Data Preview 0.1 (DP0) Delegates, and have early access to the Rubin Science Platform. We plan to test the FTF algorithm on the DP0 data set and refine our algorithm before the Rubin Observatory comes fully online in 2023.

RS and AC are supported by NSF Grant AST #1831682. This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the US Department of Energy, the US National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute for Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey.